sphinx.addnodesdocument)}( rawsourcechildren]( translations LanguagesNode)}(hhh](h pending_xref)}(hhh]docutils.nodesTextChinese (Simplified)}parenthsba attributes}(ids]classes]names]dupnames]backrefs] refdomainstdreftypedoc reftarget$/translations/zh_CN/mm/process_addrsmodnameN classnameN refexplicitutagnamehhh ubh)}(hhh]hChinese (Traditional)}hh2sbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget$/translations/zh_TW/mm/process_addrsmodnameN classnameN refexplicituh1hhh ubh)}(hhh]hItalian}hhFsbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget$/translations/it_IT/mm/process_addrsmodnameN classnameN refexplicituh1hhh ubh)}(hhh]hJapanese}hhZsbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget$/translations/ja_JP/mm/process_addrsmodnameN classnameN refexplicituh1hhh ubh)}(hhh]hKorean}hhnsbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget$/translations/ko_KR/mm/process_addrsmodnameN classnameN refexplicituh1hhh ubh)}(hhh]hSpanish}hhsbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget$/translations/sp_SP/mm/process_addrsmodnameN classnameN refexplicituh1hhh ubeh}(h]h ]h"]h$]h&]current_languageEnglishuh1h hh _documenthsourceNlineNubhcomment)}(h SPDX-License-Identifier: GPL-2.0h]h SPDX-License-Identifier: GPL-2.0}hhsbah}(h]h ]h"]h$]h&] xml:spacepreserveuh1hhhhhh>/var/lib/git/docbuild/linux/Documentation/mm/process_addrs.rsthKubhsection)}(hhh](htitle)}(hProcess Addressesh]hProcess Addresses}(hhhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhhhhhKubhcompound)}(hhh]htoctree)}(hhh]h}(h]h ]h"]h$]h&]hmm/process_addrsentries] includefiles]maxdepthKcaptionNglobhidden includehiddennumberedK titlesonly rawentries]uh1hhhhKhhubah}(h]h ]toctree-wrapperah"]h$]h&]uh1hhhhhhhhNubh paragraph)}(hUserland memory ranges are tracked by the kernel via Virtual Memory Areas or 'VMA's of type :c:struct:`!struct vm_area_struct`.h](h`Userland memory ranges are tracked by the kernel via Virtual Memory Areas or ‘VMA’s of type }(hhhhhNhNubhliteral)}(h":c:struct:`!struct vm_area_struct`h]hstruct vm_area_struct}(hhhhhNhNubah}(h]h ](xrefcc-structeh"]h$]h&]uh1hhhubh.}(hhhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhK hhhhubh)}(hXEach VMA describes a virtually contiguous memory range with identical attributes, each described by a :c:struct:`!struct vm_area_struct` object. Userland access outside of VMAs is invalid except in the case where an adjacent stack VMA could be extended to contain the accessed address.h](hfEach VMA describes a virtually contiguous memory range with identical attributes, each described by a }(hjhhhNhNubh)}(h":c:struct:`!struct vm_area_struct`h]hstruct vm_area_struct}(hjhhhNhNubah}(h]h ](jjc-structeh"]h$]h&]uh1hhjubh object. Userland access outside of VMAs is invalid except in the case where an adjacent stack VMA could be extended to contain the accessed address.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhhhhubh)}(hAll VMAs are contained within one and only one virtual address space, described by a :c:struct:`!struct mm_struct` object which is referenced by all tasks (that is, threads) which share the virtual address space. We refer to this as the :c:struct:`!mm`.h](hUAll VMAs are contained within one and only one virtual address space, described by a }(hj6hhhNhNubh)}(h:c:struct:`!struct mm_struct`h]hstruct mm_struct}(hj>hhhNhNubah}(h]h ](jjc-structeh"]h$]h&]uh1hhj6ubh{ object which is referenced by all tasks (that is, threads) which share the virtual address space. 
We refer to this as the }(hj6hhhNhNubh)}(h:c:struct:`!mm`h]hmm}(hjQhhhNhNubah}(h]h ](jjc-structeh"]h$]h&]uh1hhj6ubh.}(hj6hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhhhhubh)}(hnEach mm object contains a maple tree data structure which describes all VMAs within the virtual address space.h]hnEach mm object contains a maple tree data structure which describes all VMAs within the virtual address space.}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhhhhubhnote)}(hAn exception to this is the 'gate' VMA which is provided by architectures which use :c:struct:`!vsyscall` and is a global static object which does not belong to any specific mm.h]h)}(hAn exception to this is the 'gate' VMA which is provided by architectures which use :c:struct:`!vsyscall` and is a global static object which does not belong to any specific mm.h](hXAn exception to this is the ‘gate’ VMA which is provided by architectures which use }(hj~hhhNhNubh)}(h:c:struct:`!vsyscall`h]hvsyscall}(hjhhhNhNubah}(h]h ](jjc-structeh"]h$]h&]uh1hhj~ubhH and is a global static object which does not belong to any specific mm.}(hj~hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjzubah}(h]h ]h"]h$]h&]uh1jxhhhhhhhNubh)}(hhh](h)}(hLockingh]hLocking}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhK!ubh)}(hThe kernel is designed to be highly scalable against concurrent read operations on VMA **metadata** so a complicated set of locks are required to ensure memory corruption does not occur.h](hWThe kernel is designed to be highly scalable against concurrent read operations on VMA }(hjhhhNhNubhstrong)}(h **metadata**h]hmetadata}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhW so a complicated set of locks are required to ensure memory corruption does not occur.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhK#hjhhubjy)}(hwLocking VMAs for their metadata does not have any impact on the memory they describe nor the page tables that map them.h]h)}(hwLocking VMAs for their metadata does not have any impact on the memory they describe nor the page tables that map them.h]hwLocking VMAs for their metadata does not have any impact on the memory they describe nor the page tables that map them.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK'hjubah}(h]h ]h"]h$]h&]uh1jxhjhhhhhNubh)}(hhh](h)}(h Terminologyh]h Terminology}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhK+ubh bullet_list)}(hhh](h list_item)}(h**mmap locks** - Each MM has a read/write semaphore :c:member:`!mmap_lock` which locks at a process address space granularity which can be acquired via :c:func:`!mmap_read_lock`, :c:func:`!mmap_write_lock` and variants.h]h)}(h**mmap locks** - Each MM has a read/write semaphore :c:member:`!mmap_lock` which locks at a process address space granularity which can be acquired via :c:func:`!mmap_read_lock`, :c:func:`!mmap_write_lock` and variants.h](j)}(h**mmap locks**h]h mmap locks}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubh& - Each MM has a read/write semaphore }(hj hhhNhNubh)}(h:c:member:`!mmap_lock`h]h mmap_lock}(hj"hhhNhNubah}(h]h ](jjc-membereh"]h$]h&]uh1hhj ubhN which locks at a process address space granularity which can be acquired via }(hj hhhNhNubh)}(h:c:func:`!mmap_read_lock`h]hmmap_read_lock()}(hj5hhhNhNubah}(h]h ](jjc-funceh"]h$]h&]uh1hhj ubh, }(hj hhhNhNubh)}(h:c:func:`!mmap_write_lock`h]hmmap_write_lock()}(hjHhhhNhNubah}(h]h ](jjc-funceh"]h$]h&]uh1hhj ubh and variants.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhK-hjubah}(h]h ]h"]h$]h&]uh1jhjhhhhhNubj)}(hX**VMA locks** - The VMA lock is at VMA granularity (of course) which behaves as a read/write semaphore in practice. 
A VMA read lock is obtained via :c:func:`!lock_vma_under_rcu` (and unlocked via :c:func:`!vma_end_read`) and a write lock via :c:func:`!vma_start_write` (all VMA write locks are unlocked automatically when the mmap write lock is released). To take a VMA write lock you **must** have already acquired an :c:func:`!mmap_write_lock`.h]h)}(hX**VMA locks** - The VMA lock is at VMA granularity (of course) which behaves as a read/write semaphore in practice. A VMA read lock is obtained via :c:func:`!lock_vma_under_rcu` (and unlocked via :c:func:`!vma_end_read`) and a write lock via :c:func:`!vma_start_write` (all VMA write locks are unlocked automatically when the mmap write lock is released). To take a VMA write lock you **must** have already acquired an :c:func:`!mmap_write_lock`.h](j)}(h **VMA locks**h]h VMA locks}(hjohhhNhNubah}(h]h ]h"]h$]h&]uh1jhjkubh - The VMA lock is at VMA granularity (of course) which behaves as a read/write semaphore in practice. A VMA read lock is obtained via }(hjkhhhNhNubh)}(h:c:func:`!lock_vma_under_rcu`h]hlock_vma_under_rcu()}(hjhhhNhNubah}(h]h ](jjc-funceh"]h$]h&]uh1hhjkubh (and unlocked via }(hjkhhhNhNubh)}(h:c:func:`!vma_end_read`h]hvma_end_read()}(hjhhhNhNubah}(h]h ](jjc-funceh"]h$]h&]uh1hhjkubh) and a write lock via }(hjkhhhNhNubh)}(h:c:func:`!vma_start_write`h]hvma_start_write()}(hjhhhNhNubah}(h]h ](jjc-funceh"]h$]h&]uh1hhjkubhu (all VMA write locks are unlocked automatically when the mmap write lock is released). To take a VMA write lock you }(hjkhhhNhNubj)}(h**must**h]hmust}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjkubh have already acquired an }(hjkhhhNhNubh)}(h:c:func:`!mmap_write_lock`h]hmmap_write_lock()}(hjhhhNhNubah}(h]h ](jjc-funceh"]h$]h&]uh1hhjkubh.}(hjkhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhK0hjgubah}(h]h ]h"]h$]h&]uh1jhjhhhhhNubj)}(hX**rmap locks** - When trying to access VMAs through the reverse mapping via a :c:struct:`!struct address_space` or :c:struct:`!struct anon_vma` object (reachable from a folio via :c:member:`!folio->mapping`). VMAs must be stabilised via :c:func:`!anon_vma_[try]lock_read` or :c:func:`!anon_vma_[try]lock_write` for anonymous memory and :c:func:`!i_mmap_[try]lock_read` or :c:func:`!i_mmap_[try]lock_write` for file-backed memory. We refer to these locks as the reverse mapping locks, or 'rmap locks' for brevity. h]h)}(hX**rmap locks** - When trying to access VMAs through the reverse mapping via a :c:struct:`!struct address_space` or :c:struct:`!struct anon_vma` object (reachable from a folio via :c:member:`!folio->mapping`). VMAs must be stabilised via :c:func:`!anon_vma_[try]lock_read` or :c:func:`!anon_vma_[try]lock_write` for anonymous memory and :c:func:`!i_mmap_[try]lock_read` or :c:func:`!i_mmap_[try]lock_write` for file-backed memory. We refer to these locks as the reverse mapping locks, or 'rmap locks' for brevity.h](j)}(h**rmap locks**h]h rmap locks}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh@ - When trying to access VMAs through the reverse mapping via a }(hjhhhNhNubh)}(h!:c:struct:`!struct address_space`h]hstruct address_space}(hjhhhNhNubah}(h]h ](jjc-structeh"]h$]h&]uh1hhjubh or }(hjhhhNhNubh)}(h:c:struct:`!struct anon_vma`h]hstruct anon_vma}(hjhhhNhNubah}(h]h ](jjc-structeh"]h$]h&]uh1hhjubh$ object (reachable from a folio via }(hjhhhNhNubh)}(h:c:member:`!folio->mapping`h]hfolio->mapping}(hj+hhhNhNubah}(h]h ](jjc-membereh"]h$]h&]uh1hhjubh). 
VMAs must be stabilised via }(hjhhhNhNubh)}(h":c:func:`!anon_vma_[try]lock_read`h]hanon_vma_[try]lock_read()}(hj>hhhNhNubah}(h]h ](jjc-funceh"]h$]h&]uh1hhjubh or }hjsbh)}(h#:c:func:`!anon_vma_[try]lock_write`h]hanon_vma_[try]lock_write()}(hjQhhhNhNubah}(h]h ](jjc-funceh"]h$]h&]uh1hhjubh for anonymous memory and }(hjhhhNhNubh)}(h :c:func:`!i_mmap_[try]lock_read`h]hi_mmap_[try]lock_read()}(hjdhhhNhNubah}(h]h ](jjc-funceh"]h$]h&]uh1hhjubh or }(hjhhhNhNubh)}(h!:c:func:`!i_mmap_[try]lock_write`h]hi_mmap_[try]lock_write()}(hjwhhhNhNubah}(h]h ](jjc-funceh"]h$]h&]uh1hhjubho for file-backed memory. We refer to these locks as the reverse mapping locks, or ‘rmap locks’ for brevity.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhK6hjubah}(h]h ]h"]h$]h&]uh1jhjhhhhhNubeh}(h]h ]h"]h$]h&]bullet*uh1jhhhK-hjhhubh)}(hFWe discuss page table locks separately in the dedicated section below.h]hFWe discuss page table locks separately in the dedicated section below.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK>hjhhubh)}(hThe first thing **any** of these locks achieve is to **stabilise** the VMA within the MM tree. That is, guaranteeing that the VMA object will not be deleted from under you nor modified (except for some specific fields described below).h](hThe first thing }(hjhhhNhNubj)}(h**any**h]hany}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh of these locks achieve is to }(hjhhhNhNubj)}(h **stabilise**h]h stabilise}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh the VMA within the MM tree. That is, guaranteeing that the VMA object will not be deleted from under you nor modified (except for some specific fields described below).}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhK@hjhhubh)}(hFStabilising a VMA also keeps the address space described by it around.h]hFStabilising a VMA also keeps the address space described by it around.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKEhjhhubeh}(h] terminologyah ]h"] terminologyah$]h&]uh1hhjhhhhhK+ubh)}(hhh](h)}(h Lock usageh]h Lock usage}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhKHubh)}(hjIf you want to **read** VMA metadata fields or just keep the VMA stable, you must do one of the following:h](hIf you want to }(hjhhhNhNubj)}(h**read**h]hread}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhS VMA metadata fields or just keep the VMA stable, you must do one of the following:}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKJhjhhubj)}(hhh](j)}(hObtain an mmap read lock at the MM granularity via :c:func:`!mmap_read_lock` (or a suitable variant), unlocking it with a matching :c:func:`!mmap_read_unlock` when you're done with the VMA, *or*h]h)}(hObtain an mmap read lock at the MM granularity via :c:func:`!mmap_read_lock` (or a suitable variant), unlocking it with a matching :c:func:`!mmap_read_unlock` when you're done with the VMA, *or*h](h3Obtain an mmap read lock at the MM granularity via }(hj,hhhNhNubh)}(h:c:func:`!mmap_read_lock`h]hmmap_read_lock()}(hj4hhhNhNubah}(h]h ](jjc-funceh"]h$]h&]uh1hhj,ubh7 (or a suitable variant), unlocking it with a matching }(hj,hhhNhNubh)}(h:c:func:`!mmap_read_unlock`h]hmmap_read_unlock()}(hjGhhhNhNubah}(h]h ](jjc-funceh"]h$]h&]uh1hhj,ubh" when you’re done with the VMA, }(hj,hhhNhNubhemphasis)}(h*or*h]hor}(hj\hhhNhNubah}(h]h ]h"]h$]h&]uh1jZhj,ubeh}(h]h ]h"]h$]h&]uh1hhhhKMhj(ubah}(h]h ]h"]h$]h&]uh1jhj%hhhhhNubj)}(hTry to obtain a VMA read lock via :c:func:`!lock_vma_under_rcu`. This tries to acquire the lock atomically so might fail, in which case fall-back logic is required to instead obtain an mmap read lock if this returns :c:macro:`!NULL`, *or*h]h)}(hTry to obtain a VMA read lock via :c:func:`!lock_vma_under_rcu`. 
This tries to acquire the lock atomically so might fail, in which case fall-back logic is required to instead obtain an mmap read lock if this returns :c:macro:`!NULL`, *or*h](h"Try to obtain a VMA read lock via }(hjzhhhNhNubh)}(h:c:func:`!lock_vma_under_rcu`h]hlock_vma_under_rcu()}(hjhhhNhNubah}(h]h ](jjc-funceh"]h$]h&]uh1hhjzubh. This tries to acquire the lock atomically so might fail, in which case fall-back logic is required to instead obtain an mmap read lock if this returns }(hjzhhhNhNubh)}(h:c:macro:`!NULL`h]hNULL}(hjhhhNhNubah}(h]h ](jjc-macroeh"]h$]h&]uh1hhjzubh, }(hjzhhhNhNubj[)}(h*or*h]hor}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jZhjzubeh}(h]h ]h"]h$]h&]uh1hhhhKPhjvubah}(h]h ]h"]h$]h&]uh1jhj%hhhhhNubj)}(hAcquire an rmap lock before traversing the locked interval tree (whether anonymous or file-backed) to obtain the required VMA. h]h)}(h~Acquire an rmap lock before traversing the locked interval tree (whether anonymous or file-backed) to obtain the required VMA.h]h~Acquire an rmap lock before traversing the locked interval tree (whether anonymous or file-backed) to obtain the required VMA.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKThjubah}(h]h ]h"]h$]h&]uh1jhj%hhhhhNubeh}(h]h ]h"]h$]h&]jjuh1jhhhKMhjhhubh)}(hIf you want to **write** VMA metadata fields, then things vary depending on the field (we explore each VMA field in detail below). For the majority you must:h](hIf you want to }(hjhhhNhNubj)}(h **write**h]hwrite}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh VMA metadata fields, then things vary depending on the field (we explore each VMA field in detail below). For the majority you must:}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKWhjhhubj)}(hhh](j)}(hObtain an mmap write lock at the MM granularity via :c:func:`!mmap_write_lock` (or a suitable variant), unlocking it with a matching :c:func:`!mmap_write_unlock` when you're done with the VMA, *and*h]h)}(hObtain an mmap write lock at the MM granularity via :c:func:`!mmap_write_lock` (or a suitable variant), unlocking it with a matching :c:func:`!mmap_write_unlock` when you're done with the VMA, *and*h](h4Obtain an mmap write lock at the MM granularity via }(hjhhhNhNubh)}(h:c:func:`!mmap_write_lock`h]hmmap_write_lock()}(hjhhhNhNubah}(h]h ](jjc-funceh"]h$]h&]uh1hhjubh7 (or a suitable variant), unlocking it with a matching }(hjhhhNhNubh)}(h:c:func:`!mmap_write_unlock`h]hmmap_write_unlock()}(hj"hhhNhNubah}(h]h ](jjc-funceh"]h$]h&]uh1hhjubh" when you’re done with the VMA, }(hjhhhNhNubj[)}(h*and*h]hand}(hj5hhhNhNubah}(h]h ]h"]h$]h&]uh1jZhjubeh}(h]h ]h"]h$]h&]uh1hhhhKZhjubah}(h]h ]h"]h$]h&]uh1jhjhhhhhNubj)}(hObtain a VMA write lock via :c:func:`!vma_start_write` for each VMA you wish to modify, which will be released automatically when :c:func:`!mmap_write_unlock` is called.h]h)}(hObtain a VMA write lock via :c:func:`!vma_start_write` for each VMA you wish to modify, which will be released automatically when :c:func:`!mmap_write_unlock` is called.h](hObtain a VMA write lock via }(hjShhhNhNubh)}(h:c:func:`!vma_start_write`h]hvma_start_write()}(hj[hhhNhNubah}(h]h ](jjc-funceh"]h$]h&]uh1hhjSubhL for each VMA you wish to modify, which will be released automatically when }(hjShhhNhNubh)}(h:c:func:`!mmap_write_unlock`h]hmmap_write_unlock()}(hjnhhhNhNubah}(h]h ](jjc-funceh"]h$]h&]uh1hhjSubh is called.}(hjShhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhK]hjOubah}(h]h ]h"]h$]h&]uh1jhjhhhhhNubj)}(hIf you want to be able to write to **any** field, you must also hide the VMA from the reverse mapping by obtaining an **rmap write lock**. 
h]h)}(hIf you want to be able to write to **any** field, you must also hide the VMA from the reverse mapping by obtaining an **rmap write lock**.h](h#If you want to be able to write to }(hjhhhNhNubj)}(h**any**h]hany}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhL field, you must also hide the VMA from the reverse mapping by obtaining an }(hjhhhNhNubj)}(h**rmap write lock**h]hrmap write lock}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhK`hjubah}(h]h ]h"]h$]h&]uh1jhjhhhhhNubeh}(h]h ]h"]h$]h&]jjuh1jhhhKZhjhhubh)}(hXVMA locks are special in that you must obtain an mmap **write** lock **first** in order to obtain a VMA **write** lock. A VMA **read** lock however can be obtained without any other lock (:c:func:`!lock_vma_under_rcu` will acquire then release an RCU lock to lookup the VMA for you).h](h6VMA locks are special in that you must obtain an mmap }(hjhhhNhNubj)}(h **write**h]hwrite}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh lock }(hjhhhNhNubj)}(h **first**h]hfirst}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh in order to obtain a VMA }(hjhhhNhNubj)}(h **write**h]hwrite}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh lock. A VMA }(hjhhhNhNubj)}(h**read**h]hread}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh6 lock however can be obtained without any other lock (}(hjhhhNhNubh)}(h:c:func:`!lock_vma_under_rcu`h]hlock_vma_under_rcu()}(hjhhhNhNubah}(h]h ](jjc-funceh"]h$]h&]uh1hhjubhB will acquire then release an RCU lock to lookup the VMA for you).}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKchjhhubh)}(hThis constrains the impact of writers on readers, as a writer can interact with one VMA while a reader interacts with another simultaneously.h]hThis constrains the impact of writers on readers, as a writer can interact with one VMA while a reader interacts with another simultaneously.}(hj8hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhhjhhubjy)}(hThe primary users of VMA read locks are page fault handlers, which means that without a VMA write lock, page faults will run concurrent with whatever you are doing.h]h)}(hThe primary users of VMA read locks are page fault handlers, which means that without a VMA write lock, page faults will run concurrent with whatever you are doing.h]hThe primary users of VMA read locks are page fault handlers, which means that without a VMA write lock, page faults will run concurrent with whatever you are doing.}(hjJhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKkhjFubah}(h]h ]h"]h$]h&]uh1jxhjhhhhhNubh)}(h Examining all valid lock states:h]h Examining all valid lock states:}(hj^hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKohjhhubhtable)}(hhh]htgroup)}(hhh](hcolspec)}(hhh]h}(h]h ]h"]h$]h&]colwidthK uh1jvhjsubjw)}(hhh]h}(h]h ]h"]h$]h&]colwidthKuh1jvhjsubjw)}(hhh]h}(h]h ]h"]h$]h&]colwidthK uh1jvhjsubjw)}(hhh]h}(h]h ]h"]h$]h&]colwidthKuh1jvhjsubjw)}(hhh]h}(h]h ]h"]h$]h&]colwidthKuh1jvhjsubjw)}(hhh]h}(h]h ]h"]h$]h&]colwidthK uh1jvhjsubjw)}(hhh]h}(h]h ]h"]h$]h&]colwidthK uh1jvhjsubhthead)}(hhh]hrow)}(hhh](hentry)}(hhh]h)}(h mmap lockh]h mmap lock}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKthjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hVMA lockh]hVMA lock}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKthjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(h rmap lockh]h rmap lock}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKthjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hStable?h]hStable?}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKthjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hRead?h]hRead?}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKthj&ubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(h Write most?h]h Write most?}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKthj=ubah}(h]h 
]h"]h$]h&]uh1jhjubj)}(hhh]h)}(h Write all?h]h Write all?}(hjWhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKthjTubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jhjsubhtbody)}(hhh](j)}(hhh](j)}(hhh]h)}(h\-h]h-}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKvhjubah}(h]h ]h"]h$]h&]uh1jhj|ubj)}(hhh]h)}(h\-h]h-}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKvhjubah}(h]h ]h"]h$]h&]uh1jhj|ubj)}(hhh]h)}(h\-h]h-}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKvhjubah}(h]h ]h"]h$]h&]uh1jhj|ubj)}(hhh]h)}(hNh]hN}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKvhjubah}(h]h ]h"]h$]h&]uh1jhj|ubj)}(hhh]h)}(hjh]hN}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKvhjubah}(h]h ]h"]h$]h&]uh1jhj|ubj)}(hhh]h)}(hjh]hN}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKvhjubah}(h]h ]h"]h$]h&]uh1jhj|ubj)}(hhh]h)}(hjh]hN}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKvhjubah}(h]h ]h"]h$]h&]uh1jhj|ubeh}(h]h ]h"]h$]h&]uh1jhjyubj)}(hhh](j)}(hhh]h)}(h\-h]h-}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKwhj&ubah}(h]h ]h"]h$]h&]uh1jhj#ubj)}(hhh]h)}(hRh]hR}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKwhj=ubah}(h]h ]h"]h$]h&]uh1jhj#ubj)}(hhh]h)}(h\-h]h-}(hjWhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKwhjTubah}(h]h ]h"]h$]h&]uh1jhj#ubj)}(hhh]h)}(hYh]hY}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKwhjkubah}(h]h ]h"]h$]h&]uh1jhj#ubj)}(hhh]h)}(hjph]hY}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKwhjubah}(h]h ]h"]h$]h&]uh1jhj#ubj)}(hhh]h)}(hjh]hN}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKwhjubah}(h]h ]h"]h$]h&]uh1jhj#ubj)}(hhh]h)}(hjh]hN}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKwhjubah}(h]h ]h"]h$]h&]uh1jhj#ubeh}(h]h ]h"]h$]h&]uh1jhjyubj)}(hhh](j)}(hhh]h)}(h\-h]h-}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKxhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(h\-h]h-}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKxhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hR/Wh]hR/W}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKxhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hjph]hY}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKxhj ubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hjph]hY}(hj+ hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKxhj( ubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hjh]hN}(hjA hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKxhj> ubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hjh]hN}(hjW hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKxhjT ubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhjyubj)}(hhh](j)}(hhh]h)}(hR/Wh]hR/W}(hjv hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKyhjs ubah}(h]h ]h"]h$]h&]uh1jhjp ubj)}(hhh]h)}(h\-/Rh]h-/R}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKyhj ubah}(h]h ]h"]h$]h&]uh1jhjp ubj)}(hhh]h)}(h\-/R/Wh]h-/R/W}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKyhj ubah}(h]h ]h"]h$]h&]uh1jhjp ubj)}(hhh]h)}(hjph]hY}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKyhj ubah}(h]h ]h"]h$]h&]uh1jhjp ubj)}(hhh]h)}(hjph]hY}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKyhj ubah}(h]h ]h"]h$]h&]uh1jhjp ubj)}(hhh]h)}(hjh]hN}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKyhj ubah}(h]h ]h"]h$]h&]uh1jhjp ubj)}(hhh]h)}(hjh]hN}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKyhj ubah}(h]h ]h"]h$]h&]uh1jhjp ubeh}(h]h ]h"]h$]h&]uh1jhjyubj)}(hhh](j)}(hhh]h)}(hWh]hW}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKzhj ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(hj h]hW}(hj3 hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKzhj0 ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(h\-/Rh]h-/R}(hjI hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKzhjF ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(hjph]hY}(hj` hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKzhj] ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(hjph]hY}(hjv hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKzhjs ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(hjph]hY}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKzhj ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(hjh]hN}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKzhj ubah}(h]h ]h"]h$]h&]uh1jhj ubeh}(h]h 
]h"]h$]h&]uh1jhjyubj)}(hhh](j)}(hhh]h)}(hj h]hW}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK{hj ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(hj h]hW}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK{hj ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(hj h]hW}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK{hj ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(hjph]hY}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK{hj ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(hjph]hY}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK{hj ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(hjph]hY}(hj/ hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK{hj, ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(hjph]hY}(hjE hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK{hjB ubah}(h]h ]h"]h$]h&]uh1jhj ubeh}(h]h ]h"]h$]h&]uh1jhjyubeh}(h]h ]h"]h$]h&]uh1jwhjsubeh}(h]h ]h"]h$]h&]colsKuh1jqhjnubah}(h]h ]h"]h$]h&]uh1jlhjhhhhhNubhwarning)}(hXWhile it's possible to obtain a VMA lock while holding an mmap read lock, attempting to do the reverse is invalid as it can result in deadlock - if another task already holds an mmap write lock and attempts to acquire a VMA write lock that will deadlock on the VMA read lock.h]h)}(hXWhile it's possible to obtain a VMA lock while holding an mmap read lock, attempting to do the reverse is invalid as it can result in deadlock - if another task already holds an mmap write lock and attempts to acquire a VMA write lock that will deadlock on the VMA read lock.h]hXWhile it’s possible to obtain a VMA lock while holding an mmap read lock, attempting to do the reverse is invalid as it can result in deadlock - if another task already holds an mmap write lock and attempts to acquire a VMA write lock that will deadlock on the VMA read lock.}(hjw hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK~hjs ubah}(h]h ]h"]h$]h&]uh1jq hjhhhhhNubh)}(hAll of these locks behave as read/write semaphores in practice, so you can obtain either a read or a write lock for each of these.h]hAll of these locks behave as read/write semaphores in practice, so you can obtain either a read or a write lock for each of these.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjhhubjy)}(hXqGenerally speaking, a read/write semaphore is a class of lock which permits concurrent readers. However a write lock can only be obtained once all readers have left the critical region (and pending readers made to wait). This renders read locks on a read/write semaphore concurrent with other readers and write locks exclusive against all others holding the semaphore.h](h)}(hGenerally speaking, a read/write semaphore is a class of lock which permits concurrent readers. However a write lock can only be obtained once all readers have left the critical region (and pending readers made to wait).h]hGenerally speaking, a read/write semaphore is a class of lock which permits concurrent readers. 
However a write lock can only be obtained once all readers have left the critical region (and pending readers made to wait).}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj ubh)}(hThis renders read locks on a read/write semaphore concurrent with other readers and write locks exclusive against all others holding the semaphore.h]hThis renders read locks on a read/write semaphore concurrent with other readers and write locks exclusive against all others holding the semaphore.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj ubeh}(h]h ]h"]h$]h&]uh1jxhjhhhhhNubh)}(hhh](h)}(h VMA fieldsh]h VMA fields}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj hhhhhKubh)}(hWe can subdivide :c:struct:`!struct vm_area_struct` fields by their purpose, which makes it easier to explore their locking characteristics:h](hWe can subdivide }(hj hhhNhNubh)}(h":c:struct:`!struct vm_area_struct`h]hstruct vm_area_struct}(hj hhhNhNubah}(h]h ](jjc-structeh"]h$]h&]uh1hhj ubhY fields by their purpose, which makes it easier to explore their locking characteristics:}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhj hhubjy)}(hvWe exclude VMA lock-specific fields here to avoid confusion, as these are in effect an internal implementation detail.h]h)}(hvWe exclude VMA lock-specific fields here to avoid confusion, as these are in effect an internal implementation detail.h]hvWe exclude VMA lock-specific fields here to avoid confusion, as these are in effect an internal implementation detail.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj ubah}(h]h ]h"]h$]h&]uh1jxhj hhhhhNubjm)}(hhh](h)}(hVirtual layout fieldsh]hVirtual layout fields}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj ubjr)}(hhh](jw)}(hhh]h}(h]h ]h"]h$]h&]colwidthKuh1jvhj ubjw)}(hhh]h}(h]h ]h"]h$]h&]colwidthK(uh1jvhj ubjw)}(hhh]h}(h]h ]h"]h$]h&]colwidthK uh1jvhj ubj)}(hhh]j)}(hhh](j)}(hhh]h)}(hFieldh]hField}(hjD hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjA ubah}(h]h ]h"]h$]h&]uh1jhj> ubj)}(hhh]h)}(h Descriptionh]h Description}(hj[ hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjX ubah}(h]h ]h"]h$]h&]uh1jhj> ubj)}(hhh]h)}(h Write lockh]h Write lock}(hjr hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjo ubah}(h]h ]h"]h$]h&]uh1jhj> ubeh}(h]h ]h"]h$]h&]uh1jhj; ubah}(h]h ]h"]h$]h&]uh1jhj ubjx)}(hhh](j)}(hhh](j)}(hhh]h)}(h:c:member:`!vm_start`h]h)}(hj h]hvm_start}(hj hhhNhNubah}(h]h ](jjc-membereh"]h$]h&]uh1hhj ubah}(h]h ]h"]h$]h&]uh1hhhhKhj ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(h7Inclusive start virtual address of range VMA describes.h]h7Inclusive start virtual address of range VMA describes.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(h"mmap write, VMA write, rmap write.h]h"mmap write, VMA write, rmap write.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj ubah}(h]h ]h"]h$]h&]uh1jhj ubeh}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh](j)}(hhh]h)}(h:c:member:`!vm_end`h]h)}(hj h]hvm_end}(hj hhhNhNubah}(h]h ](jjc-membereh"]h$]h&]uh1hhj ubah}(h]h ]h"]h$]h&]uh1hhhhKhj ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(h5Exclusive end virtual address of range VMA describes.h]h5Exclusive end virtual address of range VMA describes.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(h"mmap write, VMA write, rmap write.h]h"mmap write, VMA write, rmap write.}(hj+ hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj( ubah}(h]h ]h"]h$]h&]uh1jhj ubeh}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh](j)}(hhh]h)}(h:c:member:`!vm_pgoff`h]h)}(hjM h]hvm_pgoff}(hjO hhhNhNubah}(h]h ](jjc-membereh"]h$]h&]uh1hhjK ubah}(h]h ]h"]h$]h&]uh1hhhhKhjH ubah}(h]h ]h"]h$]h&]uh1jhjE ubj)}(hhh]h)}(hDescribes the page offset into the file, the original page offset within the 
virtual address space (prior to any :c:func:`!mremap`), or PFN if a PFN map and the architecture does not support :c:macro:`!CONFIG_ARCH_HAS_PTE_SPECIAL`.h](hqDescribes the page offset into the file, the original page offset within the virtual address space (prior to any }(hjl hhhNhNubh)}(h:c:func:`!mremap`h]hmremap()}(hjt hhhNhNubah}(h]h ](jjc-funceh"]h$]h&]uh1hhjl ubh=), or PFN if a PFN map and the architecture does not support }(hjl hhhNhNubh)}(h':c:macro:`!CONFIG_ARCH_HAS_PTE_SPECIAL`h]hCONFIG_ARCH_HAS_PTE_SPECIAL}(hj hhhNhNubah}(h]h ](jjc-macroeh"]h$]h&]uh1hhjl ubh.}(hjl hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhji ubah}(h]h ]h"]h$]h&]uh1jhjE ubj)}(hhh]h)}(h"mmap write, VMA write, rmap write.h]h"mmap write, VMA write, rmap write.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj ubah}(h]h ]h"]h$]h&]uh1jhjE ubeh}(h]h ]h"]h$]h&]uh1jhj ubeh}(h]h ]h"]h$]h&]uh1jwhj ubeh}(h]h ]h"]h$]h&]colsKuh1jqhj ubeh}(h]id1ah ]h"]h$]h&]uh1jlhj hhhhhNubh)}(hThese fields describes the size, start and end of the VMA, and as such cannot be modified without first being hidden from the reverse mapping since these fields are used to locate VMAs within the reverse mapping interval trees.h]hThese fields describes the size, start and end of the VMA, and as such cannot be modified without first being hidden from the reverse mapping since these fields are used to locate VMAs within the reverse mapping interval trees.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj hhubjm)}(hhh](h)}(h Core fieldsh]h Core fields}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj ubjr)}(hhh](jw)}(hhh]h}(h]h ]h"]h$]h&]colwidthKuh1jvhj ubjw)}(hhh]h}(h]h ]h"]h$]h&]colwidthK(uh1jvhj ubjw)}(hhh]h}(h]h ]h"]h$]h&]colwidthKuh1jvhj ubj)}(hhh]j)}(hhh](j)}(hhh]h)}(hFieldh]hField}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(h Descriptionh]h Description}(hj7hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj4ubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(h Write lockh]h Write lock}(hjNhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjKubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jhj ubjx)}(hhh](j)}(hhh](j)}(hhh]h)}(h:c:member:`!vm_mm`h]h)}(hjyh]hvm_mm}(hj{hhhNhNubah}(h]h ](jjc-membereh"]h$]h&]uh1hhjwubah}(h]h ]h"]h$]h&]uh1hhhhKhjtubah}(h]h ]h"]h$]h&]uh1jhjqubj)}(hhh]h)}(hContaining mm_struct.h]hContaining mm_struct.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjqubj)}(hhh]h)}(h#None - written once on initial map.h]h#None - written once on initial map.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjqubeh}(h]h ]h"]h$]h&]uh1jhjnubj)}(hhh](j)}(hhh]h)}(h:c:member:`!vm_page_prot`h]h)}(hjh]h vm_page_prot}(hjhhhNhNubah}(h]h ](jjc-membereh"]h$]h&]uh1hhjubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hKArchitecture-specific page table protection bits determined from VMA flags.h]hKArchitecture-specific page table protection bits determined from VMA flags.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hmmap write, VMA write.h]hmmap write, VMA write.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhjnubj)}(hhh](j)}(hhh]h)}(h:c:member:`!vm_flags`h]h)}(hj)h]hvm_flags}(hj+hhhNhNubah}(h]h ](jjc-membereh"]h$]h&]uh1hhj'ubah}(h]h ]h"]h$]h&]uh1hhhhKhj$ubah}(h]h ]h"]h$]h&]uh1jhj!ubj)}(hhh]h)}(hwRead-only access to VMA flags describing attributes of the VMA, in union with private writable :c:member:`!__vm_flags`.h](h_Read-only access to VMA flags describing attributes of the VMA, in union with private writable 
}(hjHhhhNhNubh)}(h:c:member:`!__vm_flags`h]h __vm_flags}(hjPhhhNhNubah}(h]h ](jjc-membereh"]h$]h&]uh1hhjHubh.}(hjHhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjEubah}(h]h ]h"]h$]h&]uh1jhj!ubj)}(hhh]h)}(hN/Ah]hN/A}(hjrhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjoubah}(h]h ]h"]h$]h&]uh1jhj!ubeh}(h]h ]h"]h$]h&]uh1jhjnubj)}(hhh](j)}(hhh]h)}(h:c:member:`!__vm_flags`h]h)}(hjh]h __vm_flags}(hjhhhNhNubah}(h]h ](jjc-membereh"]h$]h&]uh1hhjubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hXPrivate, writable access to VMA flags field, updated by :c:func:`!vm_flags_*` functions.h](h8Private, writable access to VMA flags field, updated by }(hjhhhNhNubh)}(h:c:func:`!vm_flags_*`h]h vm_flags_*()}(hjhhhNhNubah}(h]h ](jjc-funceh"]h$]h&]uh1hhjubh functions.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hmmap write, VMA write.h]hmmap write, VMA write.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhjnubj)}(hhh](j)}(hhh]h)}(h:c:member:`!vm_file`h]h)}(hjh]hvm_file}(hjhhhNhNubah}(h]h ](jjc-membereh"]h$]h&]uh1hhjubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(h}If the VMA is file-backed, points to a struct file object describing the underlying file, if anonymous then :c:macro:`!NULL`.h](hlIf the VMA is file-backed, points to a struct file object describing the underlying file, if anonymous then }(hjhhhNhNubh)}(h:c:macro:`!NULL`h]hNULL}(hj&hhhNhNubah}(h]h ](jjc-macroeh"]h$]h&]uh1hhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(h#None - written once on initial map.h]h#None - written once on initial map.}(hjHhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjEubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhjnubj)}(hhh](j)}(hhh]h)}(h:c:member:`!vm_ops`h]h)}(hjjh]hvm_ops}(hjlhhhNhNubah}(h]h ](jjc-membereh"]h$]h&]uh1hhjhubah}(h]h ]h"]h$]h&]uh1hhhhKhjeubah}(h]h ]h"]h$]h&]uh1jhjbubj)}(hhh]h)}(hIf the VMA is file-backed, then either the driver or file-system provides a :c:struct:`!struct vm_operations_struct` object describing callbacks to be invoked on VMA lifetime events.h](hLIf the VMA is file-backed, then either the driver or file-system provides a }(hjhhhNhNubh)}(h(:c:struct:`!struct vm_operations_struct`h]hstruct vm_operations_struct}(hjhhhNhNubah}(h]h ](jjc-structeh"]h$]h&]uh1hhjubhB object describing callbacks to be invoked on VMA lifetime events.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjbubj)}(hhh]h)}(h?None - Written once on initial map by :c:func:`!f_ops->mmap()`.h](h&None - Written once on initial map by }(hjhhhNhNubh)}(h:c:func:`!f_ops->mmap()`h]h f_ops->mmap()}(hjhhhNhNubah}(h]h ](jjc-funceh"]h$]h&]uh1hhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjbubeh}(h]h ]h"]h$]h&]uh1jhjnubj)}(hhh](j)}(hhh]h)}(h:c:member:`!vm_private_data`h]h)}(hjh]hvm_private_data}(hjhhhNhNubah}(h]h ](jjc-membereh"]h$]h&]uh1hhjubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(h9A :c:member:`!void *` field for driver-specific metadata.h](hA }(hjhhhNhNubh)}(h:c:member:`!void *`h]hvoid *}(hjhhhNhNubah}(h]h ](jjc-membereh"]h$]h&]uh1hhjubh$ field for driver-specific metadata.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hHandled by driver.h]hHandled by driver.}(hj1hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj.ubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhjnubeh}(h]h ]h"]h$]h&]uh1jwhj ubeh}(h]h ]h"]h$]h&]colsKuh1jqhj ubeh}(h]id2ah ]h"]h$]h&]uh1jlhj hhhhhNubh)}(hVThese are the core fields which describe the MM the VMA 
belongs to and its attributes.h]hVThese are the core fields which describe the MM the VMA belongs to and its attributes.}(hj_hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj hhubjm)}(hhh](h)}(hConfig-specific fieldsh]hConfig-specific fields}(hjphhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjmubjr)}(hhh](jw)}(hhh]h}(h]h ]h"]h$]h&]colwidthK!uh1jvhj~ubjw)}(hhh]h}(h]h ]h"]h$]h&]colwidthKuh1jvhj~ubjw)}(hhh]h}(h]h ]h"]h$]h&]colwidthK(uh1jvhj~ubjw)}(hhh]h}(h]h ]h"]h$]h&]colwidthKuh1jvhj~ubj)}(hhh]j)}(hhh](j)}(hhh]h)}(hFieldh]hField}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hConfiguration optionh]hConfiguration option}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(h Descriptionh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(h Write lockh]h Write lock}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jhj~ubjx)}(hhh](j)}(hhh](j)}(hhh]h)}(h:c:member:`!anon_name`h]h)}(hj"h]h anon_name}(hj$hhhNhNubah}(h]h ](jjc-membereh"]h$]h&]uh1hhj ubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hCONFIG_ANON_VMA_NAMEh]hCONFIG_ANON_VMA_NAME}(hjAhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj>ubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hXA field for storing a :c:struct:`!struct anon_vma_name` object providing a name for anonymous mappings, or :c:macro:`!NULL` if none is set or the VMA is file-backed. The underlying object is reference counted and can be shared across multiple VMAs for scalability.h](hA field for storing a }(hjXhhhNhNubh)}(h!:c:struct:`!struct anon_vma_name`h]hstruct anon_vma_name}(hj`hhhNhNubah}(h]h ](jjc-structeh"]h$]h&]uh1hhjXubh4 object providing a name for anonymous mappings, or }(hjXhhhNhNubh)}(h:c:macro:`!NULL`h]hNULL}(hjshhhNhNubah}(h]h ](jjc-macroeh"]h$]h&]uh1hhjXubh if none is set or the VMA is file-backed. The underlying object is reference counted and can be shared across multiple VMAs for scalability.}(hjXhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjUubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hmmap write, VMA write.h]hmmap write, VMA write.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh](j)}(hhh]h)}(h :c:member:`!swap_readahead_info`h]h)}(hjh]hswap_readahead_info}(hjhhhNhNubah}(h]h ](jjc-membereh"]h$]h&]uh1hhjubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(h CONFIG_SWAPh]h CONFIG_SWAP}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(h\Metadata used by the swap mechanism to perform readahead. This field is accessed atomically.h]h\Metadata used by the swap mechanism to perform readahead. This field is accessed atomically.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hmmap read, swap-specific lock.h]hmmap read, swap-specific lock.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh](j)}(hhh]h)}(h:c:member:`!vm_policy`h]h)}(hj&h]h vm_policy}(hj(hhhNhNubah}(h]h ](jjc-membereh"]h$]h&]uh1hhj$ubah}(h]h ]h"]h$]h&]uh1hhhhKhj!ubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(h CONFIG_NUMAh]h CONFIG_NUMA}(hjEhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjBubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hv:c:type:`!mempolicy` object which describes the NUMA behaviour of the VMA. The underlying object is reference counted.h](h)}(h:c:type:`!mempolicy`h]h mempolicy}(hj`hhhNhNubah}(h]h ](jjc-typeeh"]h$]h&]uh1hhj\ubhb object which describes the NUMA behaviour of the VMA. 
The underlying object is reference counted.}(hj\hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjYubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hmmap write, VMA write.h]hmmap write, VMA write.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh](j)}(hhh]h)}(h:c:member:`!numab_state`h]h)}(hjh]h numab_state}(hjhhhNhNubah}(h]h ](jjc-membereh"]h$]h&]uh1hhjubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hCONFIG_NUMA_BALANCINGh]hCONFIG_NUMA_BALANCING}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(h:c:type:`!vma_numab_state` object which describes the current state of NUMA balancing in relation to this VMA. Updated under mmap read lock by :c:func:`!task_numa_work`.h](h)}(h:c:type:`!vma_numab_state`h]hvma_numab_state}(hjhhhNhNubah}(h]h ](jjc-typeeh"]h$]h&]uh1hhjubhu object which describes the current state of NUMA balancing in relation to this VMA. Updated under mmap read lock by }(hjhhhNhNubh)}(h:c:func:`!task_numa_work`h]htask_numa_work()}(hjhhhNhNubah}(h]h ](jjc-funceh"]h$]h&]uh1hhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hmmap read, numab-specific lock.h]hmmap read, numab-specific lock.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh](j)}(hhh]h)}(h:c:member:`!vm_userfaultfd_ctx`h]h)}(hj5h]hvm_userfaultfd_ctx}(hj7hhhNhNubah}(h]h ](jjc-membereh"]h$]h&]uh1hhj3ubah}(h]h ]h"]h$]h&]uh1hhhhKhj0ubah}(h]h ]h"]h$]h&]uh1jhj-ubj)}(hhh]h)}(hCONFIG_USERFAULTFDh]hCONFIG_USERFAULTFD}(hjThhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjQubah}(h]h ]h"]h$]h&]uh1jhj-ubj)}(hhh]h)}(hUserfaultfd context wrapper object of type :c:type:`!vm_userfaultfd_ctx`, either of zero size if userfaultfd is disabled, or containing a pointer to an underlying :c:type:`!userfaultfd_ctx` object which describes userfaultfd metadata.h](h+Userfaultfd context wrapper object of type }(hjkhhhNhNubh)}(h:c:type:`!vm_userfaultfd_ctx`h]hvm_userfaultfd_ctx}(hjshhhNhNubah}(h]h ](jjc-typeeh"]h$]h&]uh1hhjkubh[, either of zero size if userfaultfd is disabled, or containing a pointer to an underlying }(hjkhhhNhNubh)}(h:c:type:`!userfaultfd_ctx`h]huserfaultfd_ctx}(hjhhhNhNubah}(h]h ](jjc-typeeh"]h$]h&]uh1hhjkubh- object which describes userfaultfd metadata.}(hjkhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjhubah}(h]h ]h"]h$]h&]uh1jhj-ubj)}(hhh]h)}(hmmap write, VMA write.h]hmmap write, VMA write.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhj-ubeh}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jwhj~ubeh}(h]h ]h"]h$]h&]colsKuh1jqhjmubeh}(h]id3ah ]h"]h$]h&]uh1jlhj hhhhhNubh)}(heThese fields are present or not depending on whether the relevant kernel configuration option is set.h]heThese fields are present or not depending on whether the relevant kernel configuration option is set.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj hhubjm)}(hhh](h)}(hReverse mapping fieldsh]hReverse mapping fields}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubjr)}(hhh](jw)}(hhh]h}(h]h ]h"]h$]h&]colwidthK#uh1jvhjubjw)}(hhh]h}(h]h ]h"]h$]h&]colwidthK)uh1jvhjubjw)}(hhh]h}(h]h ]h"]h$]h&]colwidthKuh1jvhjubj)}(hhh]j)}(hhh](j)}(hhh]h)}(hFieldh]hField}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(h Descriptionh]h Description}(hj6hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj3ubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(h Write lockh]h Write lock}(hjMhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjJubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhjubah}(h]h 
]h"]h$]h&]uh1jhjubjx)}(hhh](j)}(hhh](j)}(hhh]h)}(h:c:member:`!shared.rb`h]h)}(hjxh]h shared.rb}(hjzhhhNhNubah}(h]h ](jjc-membereh"]h$]h&]uh1hhjvubah}(h]h ]h"]h$]h&]uh1hhhhKhjsubah}(h]h ]h"]h$]h&]uh1jhjpubj)}(hhh]h)}(hA red/black tree node used, if the mapping is file-backed, to place the VMA in the :c:member:`!struct address_space->i_mmap` red/black interval tree.h](hSA red/black tree node used, if the mapping is file-backed, to place the VMA in the }(hjhhhNhNubh)}(h):c:member:`!struct address_space->i_mmap`h]hstruct address_space->i_mmap}(hjhhhNhNubah}(h]h ](jjc-membereh"]h$]h&]uh1hhjubh red/black interval tree.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjpubj)}(hhh]h)}(h$mmap write, VMA write, i_mmap write.h]h$mmap write, VMA write, i_mmap write.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjpubeh}(h]h ]h"]h$]h&]uh1jhjmubj)}(hhh](j)}(hhh]h)}(h#:c:member:`!shared.rb_subtree_last`h]h)}(hjh]hshared.rb_subtree_last}(hjhhhNhNubah}(h]h ](jjc-membereh"]h$]h&]uh1hhjubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hLMetadata used for management of the interval tree if the VMA is file-backed.h]hLMetadata used for management of the interval tree if the VMA is file-backed.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(h$mmap write, VMA write, i_mmap write.h]h$mmap write, VMA write, i_mmap write.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhjmubj)}(hhh](j)}(hhh]h)}(h:c:member:`!anon_vma_chain`h]h)}(hj;h]hanon_vma_chain}(hj=hhhNhNubah}(h]h ](jjc-membereh"]h$]h&]uh1hhj9ubah}(h]h ]h"]h$]h&]uh1hhhhKhj6ubah}(h]h ]h"]h$]h&]uh1jhj3ubj)}(hhh]h)}(hList of pointers to both forked/CoW’d :c:type:`!anon_vma` objects and :c:member:`!vma->anon_vma` if it is non-:c:macro:`!NULL`.h](h(List of pointers to both forked/CoW’d }(hjZhhhNhNubh)}(h:c:type:`!anon_vma`h]hanon_vma}(hjbhhhNhNubah}(h]h ](jjc-typeeh"]h$]h&]uh1hhjZubh objects and }(hjZhhhNhNubh)}(h:c:member:`!vma->anon_vma`h]h vma->anon_vma}(hjuhhhNhNubah}(h]h ](jjc-membereh"]h$]h&]uh1hhjZubh if it is non-}(hjZhhhNhNubh)}(h:c:macro:`!NULL`h]hNULL}(hjhhhNhNubah}(h]h ](jjc-macroeh"]h$]h&]uh1hhjZubh.}(hjZhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjWubah}(h]h ]h"]h$]h&]uh1jhj3ubj)}(hhh]h)}(hmmap read, anon_vma write.h]hmmap read, anon_vma write.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhj3ubeh}(h]h ]h"]h$]h&]uh1jhjmubj)}(hhh](j)}(hhh]h)}(h:c:member:`!anon_vma`h]h)}(hjh]hanon_vma}(hjhhhNhNubah}(h]h ](jjc-membereh"]h$]h&]uh1hhjubah}(h]h ]h"]h$]h&]uh1hhhhMhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(h:c:type:`!anon_vma` object used by anonymous folios mapped exclusively to this VMA. Initially set by :c:func:`!anon_vma_prepare` serialised by the :c:macro:`!page_table_lock`. This is set as soon as any page is faulted in.h](h)}(h:c:type:`!anon_vma`h]hanon_vma}(hjhhhNhNubah}(h]h ](jjc-typeeh"]h$]h&]uh1hhjubhR object used by anonymous folios mapped exclusively to this VMA. Initially set by }(hjhhhNhNubh)}(h:c:func:`!anon_vma_prepare`h]hanon_vma_prepare()}(hjhhhNhNubah}(h]h ](jjc-funceh"]h$]h&]uh1hhjubh serialised by the }(hjhhhNhNubh)}(h:c:macro:`!page_table_lock`h]hpage_table_lock}(hjhhhNhNubah}(h]h ](jjc-macroeh"]h$]h&]uh1hhjubh0. 
This is set as soon as any page is faulted in.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh](h)}(hQWhen :c:macro:`NULL` and setting non-:c:macro:`NULL`: mmap read, page_table_lock.h](hWhen }(hj7hhhNhNubh)}(h:c:macro:`NULL`h]h)}(hjAh]hNULL}(hjChhhNhNubah}(h]h ](jjc-macroeh"]h$]h&]uh1hhj?ubah}(h]h ]h"]h$]h&]refdochٌ refdomainjreftypemacro refexplicitrefwarn reftargetNULLuh1hhhhMhj7ubh and setting non-}(hj7hhhNhNubh)}(h:c:macro:`NULL`h]h)}(hjeh]hNULL}(hjghhhNhNubah}(h]h ](jjc-macroeh"]h$]h&]uh1hhjcubah}(h]h ]h"]h$]h&]refdochٌ refdomainjreftypemacro refexplicitrefwarnj]NULLuh1hhhhMhj7ubh: mmap read, page_table_lock.}(hj7hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhj4ubh)}(h\When non-:c:macro:`NULL` and setting :c:macro:`NULL`: mmap write, VMA write, anon_vma write.h](h When non-}(hjhhhNhNubh)}(h:c:macro:`NULL`h]h)}(hjh]hNULL}(hjhhhNhNubah}(h]h ](jjc-macroeh"]h$]h&]uh1hhjubah}(h]h ]h"]h$]h&]refdochٌ refdomainjreftypemacro refexplicitrefwarnj]NULLuh1hhhhMhjubh and setting }(hjhhhNhNubh)}(h:c:macro:`NULL`h]h)}(hjh]hNULL}(hjhhhNhNubah}(h]h ](jjc-macroeh"]h$]h&]uh1hhjubah}(h]h ]h"]h$]h&]refdochٌ refdomainjreftypemacro refexplicitrefwarnj]NULLuh1hhhhMhjubh(: mmap write, VMA write, anon_vma write.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhj4ubeh}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhjmubeh}(h]h ]h"]h$]h&]uh1jwhjubeh}(h]h ]h"]h$]h&]colsKuh1jqhjubeh}(h]id4ah ]h"]h$]h&]uh1jlhj hhhhhNubh)}(hX These fields are used to both place the VMA within the reverse mapping, and for anonymous mappings, to be able to access both related :c:struct:`!struct anon_vma` objects and the :c:struct:`!struct anon_vma` in which folios mapped exclusively to this VMA should reside.h](hThese fields are used to both place the VMA within the reverse mapping, and for anonymous mappings, to be able to access both related }(hjhhhNhNubh)}(h:c:struct:`!struct anon_vma`h]hstruct anon_vma}(hjhhhNhNubah}(h]h ](jjc-structeh"]h$]h&]uh1hhjubh objects and the }(hjhhhNhNubh)}(h:c:struct:`!struct anon_vma`h]hstruct anon_vma}(hjhhhNhNubah}(h]h ](jjc-structeh"]h$]h&]uh1hhjubh> in which folios mapped exclusively to this VMA should reside.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhM hj hhubjy)}(hIf a file-backed mapping is mapped with :c:macro:`!MAP_PRIVATE` set then it can be in both the :c:type:`!anon_vma` and :c:type:`!i_mmap` trees at the same time, so all of these fields might be utilised at once.h]h)}(hIf a file-backed mapping is mapped with :c:macro:`!MAP_PRIVATE` set then it can be in both the :c:type:`!anon_vma` and :c:type:`!i_mmap` trees at the same time, so all of these fields might be utilised at once.h](h(If a file-backed mapping is mapped with }(hj8hhhNhNubh)}(h:c:macro:`!MAP_PRIVATE`h]h MAP_PRIVATE}(hj@hhhNhNubah}(h]h ](jjc-macroeh"]h$]h&]uh1hhj8ubh set then it can be in both the }(hj8hhhNhNubh)}(h:c:type:`!anon_vma`h]hanon_vma}(hjShhhNhNubah}(h]h ](jjc-typeeh"]h$]h&]uh1hhj8ubh and }(hj8hhhNhNubh)}(h:c:type:`!i_mmap`h]hi_mmap}(hjfhhhNhNubah}(h]h ](jjc-typeeh"]h$]h&]uh1hhj8ubhJ trees at the same time, so all of these fields might be utilised at once.}(hj8hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhj4ubah}(h]h ]h"]h$]h&]uh1jxhj hhhhhNubeh}(h] vma-fieldsah ]h"] vma fieldsah$]h&]uh1hhjhhhhhKubeh}(h] lock-usageah ]h"] lock usageah$]h&]uh1hhjhhhhhKHubh)}(hhh](h)}(h Page tablesh]h Page tables}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhMubh)}(hXWe won't speak exhaustively on the subject but broadly speaking, page tables map virtual addresses to physical ones through a series of page tables, each of which contain entries with physical 
addresses for the next page table level (along with flags), and at the leaf level the physical addresses of the underlying physical data pages or a special entry such as a swap entry, migration entry or other special marker. Offsets into these pages are provided by the virtual address itself.h]hXWe won’t speak exhaustively on the subject but broadly speaking, page tables map virtual addresses to physical ones through a series of page tables, each of which contain entries with physical addresses for the next page table level (along with flags), and at the leaf level the physical addresses of the underlying physical data pages or a special entry such as a swap entry, migration entry or other special marker. Offsets into these pages are provided by the virtual address itself.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubh)}(hIn Linux these are divided into five levels - PGD, P4D, PUD, PMD and PTE. Huge pages might eliminate one or two of these levels, but when this is the case we typically refer to the leaf level as the PTE level regardless.h]hIn Linux these are divided into five levels - PGD, P4D, PUD, PMD and PTE. Huge pages might eliminate one or two of these levels, but when this is the case we typically refer to the leaf level as the PTE level regardless.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM"hjhhubjy)}(hXSIn instances where the architecture supports fewer page tables than five the kernel cleverly 'folds' page table levels, that is stubbing out functions related to the skipped levels. This allows us to conceptually act as if there were always five levels, even if the compiler might, in practice, eliminate any code relating to missing ones.h]h)}(hXSIn instances where the architecture supports fewer page tables than five the kernel cleverly 'folds' page table levels, that is stubbing out functions related to the skipped levels. This allows us to conceptually act as if there were always five levels, even if the compiler might, in practice, eliminate any code relating to missing ones.h]hXWIn instances where the architecture supports fewer page tables than five the kernel cleverly ‘folds’ page table levels, that is stubbing out functions related to the skipped levels. This allows us to conceptually act as if there were always five levels, even if the compiler might, in practice, eliminate any code relating to missing ones.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM&hjubah}(h]h ]h"]h$]h&]uh1jxhjhhhhhNubh)}(hAThere are four key operations typically performed on page tables:h]hAThere are four key operations typically performed on page tables:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM-hjhhubhenumerated_list)}(hhh](j)}(hX!**Traversing** page tables - Simply reading page tables in order to traverse them. This only requires that the VMA is kept stable, so a lock which establishes this suffices for traversal (there are also lockless variants which eliminate even this requirement, such as :c:func:`!gup_fast`).h]h)}(hX!**Traversing** page tables - Simply reading page tables in order to traverse them. This only requires that the VMA is kept stable, so a lock which establishes this suffices for traversal (there are also lockless variants which eliminate even this requirement, such as :c:func:`!gup_fast`).h](j)}(h**Traversing**h]h Traversing}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh page tables - Simply reading page tables in order to traverse them. 
This only requires that the VMA is kept stable, so a lock which establishes this suffices for traversal (there are also lockless variants which eliminate even this requirement, such as }(hjhhhNhNubh)}(h:c:func:`!gup_fast`h]h gup_fast()}(hjhhhNhNubah}(h]h ](jjc-funceh"]h$]h&]uh1hhjubh).}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhM/hjubah}(h]h ]h"]h$]h&]uh1jhjhhhhhNubj)}(h**Installing** page table mappings - Whether creating a new mapping or modifying an existing one in such a way as to change its identity. This requires that the VMA is kept stable via an mmap or VMA lock (explicitly not rmap locks).h]h)}(h**Installing** page table mappings - Whether creating a new mapping or modifying an existing one in such a way as to change its identity. This requires that the VMA is kept stable via an mmap or VMA lock (explicitly not rmap locks).h](j)}(h**Installing**h]h Installing}(hj.hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj*ubh page table mappings - Whether creating a new mapping or modifying an existing one in such a way as to change its identity. This requires that the VMA is kept stable via an mmap or VMA lock (explicitly not rmap locks).}(hj*hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhM3hj&ubah}(h]h ]h"]h$]h&]uh1jhjhhhhhNubj)}(hX**Zapping/unmapping** page table entries - This is what the kernel calls clearing page table mappings at the leaf level only, whilst leaving all page tables in place. This is a very common operation in the kernel performed on file truncation, the :c:macro:`!MADV_DONTNEED` operation via :c:func:`!madvise`, and others. This is performed by a number of functions including :c:func:`!unmap_mapping_range` and :c:func:`!unmap_mapping_pages`. The VMA need only be kept stable for this operation.h]h)}(hX**Zapping/unmapping** page table entries - This is what the kernel calls clearing page table mappings at the leaf level only, whilst leaving all page tables in place. This is a very common operation in the kernel performed on file truncation, the :c:macro:`!MADV_DONTNEED` operation via :c:func:`!madvise`, and others. This is performed by a number of functions including :c:func:`!unmap_mapping_range` and :c:func:`!unmap_mapping_pages`. The VMA need only be kept stable for this operation.h](j)}(h**Zapping/unmapping**h]hZapping/unmapping}(hjThhhNhNubah}(h]h ]h"]h$]h&]uh1jhjPubh page table entries - This is what the kernel calls clearing page table mappings at the leaf level only, whilst leaving all page tables in place. This is a very common operation in the kernel performed on file truncation, the }(hjPhhhNhNubh)}(h:c:macro:`!MADV_DONTNEED`h]h MADV_DONTNEED}(hjfhhhNhNubah}(h]h ](jjc-macroeh"]h$]h&]uh1hhjPubh operation via }(hjPhhhNhNubh)}(h:c:func:`!madvise`h]h madvise()}(hjyhhhNhNubah}(h]h ](jjc-funceh"]h$]h&]uh1hhjPubhC, and others. This is performed by a number of functions including }(hjPhhhNhNubh)}(h:c:func:`!unmap_mapping_range`h]hunmap_mapping_range()}(hjhhhNhNubah}(h]h ](jjc-funceh"]h$]h&]uh1hhjPubh and }(hjPhhhNhNubh)}(h:c:func:`!unmap_mapping_pages`h]hunmap_mapping_pages()}(hjhhhNhNubah}(h]h ](jjc-funceh"]h$]h&]uh1hhjPubh6. 
4. **Freeing** page tables - When finally the kernel removes page tables from
   a userland process (typically via :c:func:`!free_pgtables`) extreme care
   must be taken to ensure this is done safely, as this logic finally frees
   all page tables in the specified range, ignoring existing leaf entries (it
   assumes the caller has both zapped the range and prevented any further
   faults or modifications within it).

.. note:: Modifying mappings for reclaim or migration is performed under rmap
   lock as it, like zapping, does not fundamentally modify the identity of
   what is being mapped.

**Traversing** and **zapping** ranges can be performed holding any one of the
locks described in the terminology section above - that is the mmap lock, the
VMA lock or either of the reverse mapping locks.

That is - as long as you keep the relevant VMA **stable** - you are good to go
ahead and perform these operations on page tables (though internally, kernel
operations that perform writes also acquire internal page table locks to
serialise - see the page table implementation detail section for more
details).
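To make the **traversing** operation concrete, below is a minimal sketch of a
walk from the PGD down to the PTE level. It assumes the caller holds one of
the locks above to keep the VMA stable; the helper name
:c:func:`!example_walk_to_pte` is purely illustrative, and huge page and
bad-entry handling at the PMD level is elided for brevity:

.. code-block:: c

  #include <linux/mm.h>
  #include <linux/pgtable.h>

  /*
   * Illustrative only: walk the five page table levels for addr,
   * returning a mapped PTE pointer or NULL if any level is empty.
   * The caller must keep the VMA stable and must pte_unmap() the
   * returned pointer when done.
   */
  static pte_t *example_walk_to_pte(struct mm_struct *mm, unsigned long addr)
  {
          pgd_t *pgd = pgd_offset(mm, addr);
          p4d_t *p4d;
          pud_t *pud;
          pmd_t *pmd;

          if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
                  return NULL;
          p4d = p4d_offset(pgd, addr);
          if (p4d_none(*p4d) || unlikely(p4d_bad(*p4d)))
                  return NULL;
          pud = pud_offset(p4d, addr);
          if (pud_none(*pud) || unlikely(pud_bad(*pud)))
                  return NULL;
          pmd = pmd_offset(pud, addr);
          if (pmd_none(*pmd))
                  return NULL;
          /* Maps the PTE page into kernel memory (on 32-bit highmem). */
          return pte_offset_map(pmd, addr);
  }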
When **installing** page table entries, the mmap or VMA lock must be held to
keep the VMA stable. We explore why this is in the page table locking details
section below.

.. warning:: Page tables are normally only traversed in regions covered by
   VMAs. If you want to traverse page tables in areas that might not be
   covered by VMAs, heavier locking is required. See
   :c:func:`!walk_page_range_novma` for details.

**Freeing** page tables is an entirely internal memory management operation
and has special requirements (see the page freeing section below for more
details).

.. warning:: When **freeing** page tables, it must not be possible for VMAs
   containing the ranges those page tables map to be accessible via the
   reverse mapping. The :c:func:`!free_pgtables` function removes the relevant
   VMAs from the reverse mappings, but no other VMAs can be permitted to be
   accessible and span the specified range.
Lock ordering
~~~~~~~~~~~~~

As we have multiple locks across the kernel which may or may not be taken at
the same time as explicit mm or VMA locks, we have to be wary of lock
inversion, and the **order** in which locks are acquired and released becomes
very important.

.. note:: Lock inversion occurs when two threads need to acquire multiple
   locks, but in doing so inadvertently cause a mutual deadlock.

   For example, consider thread 1 which holds lock A and tries to acquire lock
   B, while thread 2 holds lock B and tries to acquire lock A.

   Both threads are now deadlocked on each other. However, had they attempted
   to acquire locks in the same order, one would have waited for the other to
   complete its work and no deadlock would have occurred.
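To illustrate why consistent ordering avoids this, consider the following
minimal userspace analogy using POSIX threads (this is not kernel code, merely
a runnable demonstration of the principle):

.. code-block:: c

  #include <pthread.h>
  #include <stdio.h>

  static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
  static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

  /*
   * Both threads honour the ordering lock_a -> lock_b, so neither can
   * hold lock_b while waiting on lock_a and deadlock is impossible.
   */
  static void *worker(void *arg)
  {
          pthread_mutex_lock(&lock_a);
          pthread_mutex_lock(&lock_b);
          printf("thread %ld holds both locks\n", (long)arg);
          pthread_mutex_unlock(&lock_b);
          pthread_mutex_unlock(&lock_a);
          return NULL;
  }

  int main(void)
  {
          pthread_t t1, t2;

          pthread_create(&t1, NULL, worker, (void *)1L);
          pthread_create(&t2, NULL, worker, (void *)2L);
          pthread_join(t1, NULL);
          pthread_join(t2, NULL);
          return 0;
  }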
The opening comment in :c:macro:`!mm/rmap.c` describes in detail the required
ordering of locks within memory management code:

.. code-block:: none

  inode->i_rwsem        (while writing or truncating, not reading or faulting)
    mm->mmap_lock
      mapping->invalidate_lock (in filemap_fault)
        folio_lock
          hugetlbfs_i_mmap_rwsem_key (in huge_pmd_share, see hugetlbfs below)
            vma_start_write
              mapping->i_mmap_rwsem
                anon_vma->rwsem
                  mm->page_table_lock or pte_lock
                    swap_lock (in swap_duplicate, swap_info_get)
                      mmlist_lock (in mmput, drain_mmlist and others)
                      mapping->private_lock (in block_dirty_folio)
                        i_pages lock (widely used)
                          lruvec->lru_lock (in folio_lruvec_lock_irq)
                      inode->i_lock (in set_page_dirty's __mark_inode_dirty)
                      bdi.wb->list_lock (in set_page_dirty's __mark_inode_dirty)
                        sb_lock (within inode_lock in fs/fs-writeback.c)
                        i_pages lock (widely used, in set_page_dirty,
                                      in arch-dependent flush_dcache_mmap_lock,
                                      within bdi.wb->list_lock in __sync_single_inode)

There is also a file-system specific lock ordering comment located at the top
of :c:macro:`!mm/filemap.c`:
.. code-block:: none

  ->i_mmap_rwsem                (truncate_pagecache)
    ->private_lock              (__free_pte->block_dirty_folio)
      ->swap_lock               (exclusive_swap_page, others)
        ->i_pages lock

  ->i_rwsem
    ->invalidate_lock           (acquired by fs in truncate path)
      ->i_mmap_rwsem            (truncate->unmap_mapping_range)

  ->mmap_lock
    ->i_mmap_rwsem
      ->page_table_lock or pte_lock     (various, mainly in memory.c)
        ->i_pages lock          (arch-dependent flush_dcache_mmap_lock)

  ->mmap_lock
    ->invalidate_lock           (filemap_fault)
      ->lock_page               (filemap_fault, access_process_vm)

  ->i_rwsem                     (generic_perform_write)
    ->mmap_lock                 (fault_in_readable->do_page_fault)

  bdi->wb.list_lock
    sb_lock                     (fs/fs-writeback.c)
    ->i_pages lock              (__sync_single_inode)

  ->i_mmap_rwsem
    ->anon_vma.lock             (vma_merge)

  ->anon_vma.lock
    ->page_table_lock or pte_lock       (anon_vma_prepare and various)

  ->page_table_lock or pte_lock
    ->swap_lock                 (try_to_unmap_one)
    ->private_lock              (try_to_unmap_one)
    ->i_pages lock              (try_to_unmap_one)
    ->lruvec->lru_lock          (follow_page_mask->mark_page_accessed)
    ->lruvec->lru_lock          (check_pte_range->folio_isolate_lru)
    ->private_lock              (folio_remove_rmap_pte->set_page_dirty)
    ->i_pages lock              (folio_remove_rmap_pte->set_page_dirty)
    bdi.wb->list_lock           (folio_remove_rmap_pte->set_page_dirty)
      ->inode->i_lock           (folio_remove_rmap_pte->set_page_dirty)
    bdi.wb->list_lock           (zap_pte_range->set_page_dirty)
      ->inode->i_lock           (zap_pte_range->set_page_dirty)
      ->private_lock            (zap_pte_range->block_dirty_folio)

Please check the current state of these comments which may have changed since
the time of writing of this document.

Locking Implementation Details
------------------------------

.. warning:: Locking rules for PTE-level page tables are very different from
   locking rules for page tables at other levels.

Page table locking details
~~~~~~~~~~~~~~~~~~~~~~~~~~

In addition to the locks described in the terminology section above, we have
additional locks dedicated to page tables:

* **Higher level page table locks** - Higher level page tables, that is PGD,
  P4D and PUD each make use of the process address space granularity
  :c:member:`!mm->page_table_lock` lock when modified.
* **Fine-grained page table locks** - PMDs and PTEs each have fine-grained
  locks either kept within the folios describing the page tables or allocated
  separately and pointed at by the folios if :c:macro:`!ALLOC_SPLIT_PTLOCKS`
  is set. The PMD spin lock is obtained via :c:func:`!pmd_lock`, however PTEs
  are mapped into higher memory (if a 32-bit system) and carefully locked via
  :c:func:`!pte_offset_map_lock`.

These locks represent the minimum required to interact with each page table
level, but there are further requirements.

Importantly, note that on a **traversal** of page tables, sometimes no such
locks are taken. However, at the PTE level, at least concurrent page table
deletion must be prevented (using RCU) and the page table must be mapped into
high memory, see below.
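To make the fine-grained PMD lock concrete, a minimal sketch follows;
:c:func:`!example_update_pmd` is a hypothetical caller, not a kernel function,
and the actual modification is elided:

.. code-block:: c

  #include <linux/mm.h>

  /*
   * Illustrative only: take the split PMD lock around a modification.
   * pmd_lock() returns the spinlock protecting this PMD (either the
   * per-page split lock or mm->page_table_lock, depending on config),
   * already acquired.
   */
  static void example_update_pmd(struct mm_struct *mm, pmd_t *pmd)
  {
          spinlock_t *ptl = pmd_lock(mm, pmd);

          /* ... modify the PMD entry while holding its lock ... */

          spin_unlock(ptl);
  }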
Whether care is taken on reading the page table entries depends on the
architecture, see the section on atomicity below.

Locking rules
^^^^^^^^^^^^^

We establish basic locking rules when interacting with page tables:

* When changing a page table entry the page table lock for that page table
  **must** be held, except if you can safely assume nobody can access the page
  tables concurrently (such as on invocation of :c:func:`!free_pgtables`).
* Reads from and writes to page table entries must be *appropriately* atomic.
  See the section on atomicity below for details.
* Populating previously empty entries requires that the mmap or VMA locks are
  held (read or write), doing so with only rmap locks would be dangerous (see
  the warning below).
* As mentioned previously, zapping can be performed while simply keeping the
  VMA stable, that is holding any one of the mmap, VMA or rmap locks.
.. warning:: Populating previously empty entries is dangerous as, when
   unmapping VMAs, :c:func:`!vms_clear_ptes` has a window of time between
   zapping (via :c:func:`!unmap_vmas`) and freeing page tables (via
   :c:func:`!free_pgtables`), where the VMA is still visible in the rmap tree.
   :c:func:`!free_pgtables` assumes that the zap has already been performed
   and removes PTEs unconditionally (along with all other page tables in the
   freed range), so installing new PTE entries could leak memory and also
   cause other unexpected and dangerous behaviour.
There are additional rules applicable when moving page tables, which we
discuss in the section on this topic below.

PTE-level page tables are different from page tables at other levels, and
there are extra requirements for accessing them:

* On 32-bit architectures, they may be in high memory (meaning they need to be
  mapped into kernel memory to be accessible).
* When empty, they can be unlinked and RCU-freed while holding an mmap lock or
  rmap lock for reading in combination with the PTE and PMD page table locks.
  In particular, this happens in :c:func:`!retract_page_tables` when handling
  :c:macro:`!MADV_COLLAPSE`. So accessing PTE-level page tables requires at
  least holding an RCU read lock; but that only suffices for readers that can
  tolerate racing with concurrent page table updates such that an empty PTE is
  observed (in a page table that has actually already been detached and marked
  for RCU freeing) while another new page table has been installed in the same
  location and filled with entries. Writers normally need to take the PTE lock
  and revalidate that the PMD entry still refers to the same PTE-level page
  table. If the writer does not care whether it is the same PTE-level page
  table, it can take the PMD lock and revalidate that the contents of the PMD
  entry still meet the requirements. In particular, this also happens in
  :c:func:`!retract_page_tables` when handling :c:macro:`!MADV_COLLAPSE`.
To access PTE-level page tables, a helper like :c:func:`!pte_offset_map_lock`
or :c:func:`!pte_offset_map` can be used depending on stability requirements.
These map the page table into kernel memory if required, take the RCU lock,
and depending on variant, may also look up or acquire the PTE lock. See the
comment on :c:func:`!__pte_offset_map_lock`.
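For illustration, a minimal sketch of safely inspecting a PTE using this
helper; the function name :c:func:`!example_pte_present` is hypothetical:

.. code-block:: c

  #include <linux/mm.h>

  /*
   * Illustrative only: check whether addr has a present PTE.
   * pte_offset_map_lock() revalidates the PMD entry, maps the PTE page
   * and acquires the PTE lock; it returns NULL if the page table has
   * gone away under us (e.g. due to a concurrent THP collapse).
   */
  static bool example_pte_present(struct mm_struct *mm, pmd_t *pmd,
                                  unsigned long addr)
  {
          spinlock_t *ptl;
          pte_t *ptep;
          bool ret;

          ptep = pte_offset_map_lock(mm, pmd, addr, &ptl);
          if (!ptep)
                  return false;
          ret = pte_present(ptep_get(ptep));
          pte_unmap_unlock(ptep, ptl);
          return ret;
  }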
Atomicity
^^^^^^^^^

Regardless of page table locks, the MMU hardware concurrently updates accessed
and dirty bits (perhaps more, depending on architecture). Additionally, page
table traversal operations may occur in parallel (though holding the VMA
stable), and functionality like GUP-fast locklessly traverses (that is, reads)
page tables without even keeping the VMA stable at all.

When performing a page table traversal and keeping the VMA stable, whether a
read must be performed once and only once or not depends on the architecture
(for instance x86-64 does not require any special precautions).

If a write is being performed, or if a read informs whether a write takes
place (on an installation of a page table entry say, for instance in
:c:func:`!__pud_install`), special care must always be taken. In these cases
we can never assume that page table locks give us entirely exclusive access,
and must retrieve page table entries once and only once.

If we are reading page table entries, then we need only ensure that the
compiler does not rearrange our loads. This is achieved via
:c:func:`!pXXp_get` functions - :c:func:`!pgdp_get`, :c:func:`!p4dp_get`,
:c:func:`!pudp_get`, :c:func:`!pmdp_get`, and :c:func:`!ptep_get`.
Each of these uses :c:func:`!READ_ONCE` to guarantee that the compiler reads
the page table entry only once.

However, if we wish to manipulate an existing page table entry and care about
the previously stored data, we must go further and use a hardware atomic
operation as, for example, in :c:func:`!ptep_get_and_clear`.

Equally, operations that do not rely on the VMA being held stable, such as
GUP-fast (see :c:func:`!gup_fast` and its various page table level handlers
like :c:func:`!gup_fast_pte_range`), must very carefully interact with page
table entries, using functions such as :c:func:`!ptep_get_lockless` and
equivalent for higher level page table levels.

Writes to page table entries must also be appropriately atomic, as established
by :c:func:`!set_pXX` functions - :c:func:`!set_pgd`, :c:func:`!set_p4d`,
:c:func:`!set_pud`, :c:func:`!set_pmd`, and :c:func:`!set_pte`.

Equally functions which clear page table entries must be appropriately atomic,
as in :c:func:`!pXX_clear` functions - :c:func:`!pgd_clear`,
:c:func:`!p4d_clear`, :c:func:`!pud_clear`, :c:func:`!pmd_clear`, and
:c:func:`!pte_clear`.
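Tying these helpers together, the following hypothetical sketch contrasts a
plain atomic read with an atomic read-modify-write;
:c:func:`!example_clear_pte` is illustrative and assumes the PTE lock is held:

.. code-block:: c

  #include <linux/mm.h>

  /*
   * Illustrative only: read a PTE once via ptep_get() (READ_ONCE()
   * internally), then atomically fetch-and-clear it so that no
   * concurrent hardware accessed/dirty bit update is lost.
   * Assumes the PTE lock is held.
   */
  static pte_t example_clear_pte(struct mm_struct *mm, unsigned long addr,
                                 pte_t *ptep)
  {
          pte_t pte = ptep_get(ptep);     /* single, non-torn read */

          if (pte_none(pte))
                  return pte;

          /*
           * Atomic RMW: returns the old entry, including any A/D bits
           * the hardware may have set concurrently.
           */
          return ptep_get_and_clear(mm, addr, ptep);
  }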
Page table installation
^^^^^^^^^^^^^^^^^^^^^^^

Page table installation is performed with the VMA held stable explicitly by an
mmap or VMA lock in read or write mode (see the warning in the locking rules
section for details as to why).

When allocating a P4D, PUD or PMD and setting the relevant entry in the above
PGD, P4D or PUD, the :c:member:`!mm->page_table_lock` must be held. This is
acquired in :c:func:`!__p4d_alloc`, :c:func:`!__pud_alloc` and
:c:func:`!__pmd_alloc` respectively.
.. note:: :c:func:`!__pmd_alloc` actually invokes :c:func:`!pud_lock` and
   :c:func:`!pud_lockptr` in turn, however at the time of writing it
   ultimately references the :c:member:`!mm->page_table_lock`.

Allocating a PTE will either use the :c:member:`!mm->page_table_lock` or, if
:c:macro:`!USE_SPLIT_PMD_PTLOCKS` is defined, a lock embedded in the PMD
physical page metadata in the form of a :c:struct:`!struct ptdesc`, acquired
by :c:func:`!pmd_ptdesc` called from :c:func:`!pmd_lock` and ultimately
:c:func:`!__pte_alloc`.

Finally, modifying the contents of the PTE requires special treatment, as the
PTE page table lock must be acquired whenever we want stable and exclusive
access to entries contained within a PTE, especially when we wish to modify
them.

This is performed via :c:func:`!pte_offset_map_lock` which carefully checks to
ensure that the PTE hasn't changed from under us, ultimately invoking
:c:func:`!pte_lockptr` to obtain a spin lock at PTE granularity contained
within the :c:struct:`!struct ptdesc` associated with the physical PTE page.
The lock must be released via :c:func:`!pte_unmap_unlock`.
.. note:: There are some variants on this, such as
   :c:func:`!pte_offset_map_rw_nolock` when we know we hold the PTE stable but
   for brevity we do not explore this. See the comment for
   :c:func:`!__pte_offset_map_lock` for more details.

When modifying data in ranges we typically only wish to allocate higher page
tables as necessary, using these locks to avoid races or overwriting anything,
and set/clear data at the PTE level as required (for instance when page
faulting or zapping).

A typical pattern taken when traversing page table entries to install a new
mapping is to optimistically determine whether the page table entry in the
table above is empty, if so, only then acquiring the page table lock and
checking again to see if it was allocated underneath us.

This allows for a traversal with page table locks only being taken when
required. An example of this is :c:func:`!__pud_alloc`.
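A sketch of this optimistic pattern, modelled loosely on
:c:func:`!__pud_alloc` at the time of writing (memory barriers and page table
accounting are elided, and :c:func:`!example_pud_alloc` is a hypothetical
name):

.. code-block:: c

  #include <linux/mm.h>

  /*
   * Illustrative only: install a new PUD under p4d if none exists,
   * taking mm->page_table_lock only once the entry looks empty, and
   * re-checking under the lock in case we raced with another installer.
   */
  static int example_pud_alloc(struct mm_struct *mm, p4d_t *p4d,
                               unsigned long address)
  {
          pud_t *new = pud_alloc_one(mm, address);

          if (!new)
                  return -ENOMEM;

          spin_lock(&mm->page_table_lock);
          if (!p4d_present(*p4d)) {
                  /* Still empty: install our newly allocated PUD. */
                  p4d_populate(mm, p4d, new);
          } else {
                  /* Somebody beat us to it: discard our allocation. */
                  pud_free(mm, new);
          }
          spin_unlock(&mm->page_table_lock);
          return 0;
  }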
At the leaf page table, that is the PTE, we can't entirely rely on this
pattern as we have separate PMD and PTE locks and a THP collapse for instance
might have eliminated the PMD entry as well as the PTE from under us.

This is why :c:func:`!__pte_offset_map_lock` locklessly retrieves the PMD
entry for the PTE, carefully checking it is as expected, before acquiring the
PTE-specific lock, and then *again* checking that the PMD entry is as
expected.

If a THP collapse (or similar) were to occur then the lock on both pages would
be acquired, so we can ensure this is prevented while the PTE lock is held.

Installing entries this way ensures mutual exclusion on write.

Page table freeing
^^^^^^^^^^^^^^^^^^

Tearing down page tables themselves is something that requires significant
care. There must be no way that page tables designated for removal can be
traversed or referenced by concurrent tasks.
It is insufficient to simply hold an mmap write lock and VMA lock (which will
prevent racing faults, and rmap operations), as a file-backed mapping can be
truncated under the :c:struct:`!struct address_space->i_mmap_rwsem` alone.

As a result, no VMA which can be accessed via the reverse mapping (either
through the :c:struct:`!struct anon_vma->rb_root` or the
:c:member:`!struct address_space->i_mmap` interval trees) can have its page
tables torn down.

The operation is typically performed via :c:func:`!free_pgtables`, which
assumes either the mmap write lock has been taken (as specified by its
:c:member:`!mm_wr_locked` parameter), or that the VMA is already unreachable.

It carefully removes the VMA from all reverse mappings, however it's important
that no new ones overlap these or any route remain to permit access to
addresses within the range whose page tables are being torn down.

Additionally, it assumes that a zap has already been performed and steps have
been taken to ensure that no further page table entries can be installed
between the zap and the invocation of :c:func:`!free_pgtables`.

Since it is assumed that all such steps have been taken, page table entries
are cleared without page table locks (in the :c:func:`!pgd_clear`,
:c:func:`!p4d_clear`, :c:func:`!pud_clear`, and :c:func:`!pmd_clear`
functions).
.. note:: It is possible for leaf page tables to be torn down independently of
   the page tables above them, as is done by :c:func:`!retract_page_tables`,
   which is performed under the i_mmap read lock, PMD, and PTE page table
   locks, without this level of care.

Page table moving
^^^^^^^^^^^^^^^^^

Some functions manipulate page table levels above PMD (that is PUD, P4D and
PGD page tables). Most notable of these is :c:func:`!mremap`, which is capable
of moving higher level page tables.

In these instances, it is required that **all** locks are taken, that is the
mmap lock, the VMA lock and the relevant rmap locks.

You can observe this in the :c:func:`!mremap` implementation in the functions
:c:func:`!take_rmap_locks` and :c:func:`!drop_rmap_locks` which perform the
rmap side of lock acquisition, invoked ultimately by
:c:func:`!move_page_tables`.
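For illustration, at the time of writing :c:func:`!take_rmap_locks` looks
roughly as follows - note that the file rmap lock is taken before the anon
rmap lock, consistent with the ordering given in the lock ordering section
above - though you should consult the current source for the authoritative
version:

.. code-block:: c

  /*
   * Sketch of mm/mremap.c behaviour at the time of writing.
   * i_mmap_rwsem is ordered before anon_vma->rwsem.
   */
  static void take_rmap_locks(struct vm_area_struct *vma)
  {
          if (vma->vm_file)
                  i_mmap_lock_write(vma->vm_file->f_mapping);
          if (vma->anon_vma)
                  anon_vma_lock_write(vma->anon_vma);
  }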
VMA lock internals
~~~~~~~~~~~~~~~~~~

Overview
^^^^^^^^

VMA read locking is entirely optimistic - if the lock is contended or a
competing write has started, then we do not obtain a read lock.

A VMA **read** lock is obtained by :c:func:`!lock_vma_under_rcu`, which first
calls :c:func:`!rcu_read_lock` to ensure that the VMA is looked up in an RCU
critical section, then attempts to VMA lock it via :c:func:`!vma_start_read`,
before releasing the RCU lock via :c:func:`!rcu_read_unlock`.

In cases when the user already holds the mmap read lock,
:c:func:`!vma_start_read_locked` and :c:func:`!vma_start_read_locked_nested`
can be used. These functions do not fail due to lock contention but the caller
should still check their return values in case they fail for other reasons.
VMA read locks increment the :c:member:`!vma.vm_refcnt` reference counter for
their duration and the caller of :c:func:`!lock_vma_under_rcu` must drop it
via :c:func:`!vma_end_read`.
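A minimal sketch of the intended usage pattern follows;
:c:func:`!example_fault_path` is a hypothetical fault-path-style caller, and
the fallback shown is illustrative:

.. code-block:: c

  #include <linux/mm.h>

  /*
   * Illustrative only: try the lightweight per-VMA read lock first and
   * fall back to the mmap read lock if that fails.
   */
  static void example_fault_path(struct mm_struct *mm, unsigned long address)
  {
          struct vm_area_struct *vma;

          vma = lock_vma_under_rcu(mm, address);
          if (vma) {
                  /* ... operate on the VMA under the VMA read lock ... */
                  vma_end_read(vma);      /* drops vma.vm_refcnt */
                  return;
          }

          /* Contended, or a write was in progress: take the mmap lock. */
          mmap_read_lock(mm);
          vma = find_vma(mm, address);
          if (vma) {
                  /* ... operate on the VMA under the mmap read lock ... */
          }
          mmap_read_unlock(mm);
  }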
Implementation details
~~~~~~~~~~~~~~~~~~~~~~

The VMA lock mechanism is designed to be a lightweight means of avoiding the
use of the heavily contended mmap lock. It is implemented using a combination
of a reference counter and sequence numbers belonging to the containing
:c:struct:`!struct mm_struct` and the VMA.

Read locks are acquired via :c:func:`!vma_start_read`, which is an optimistic
operation, i.e. it tries to acquire a read lock but returns false if it is
unable to do so. At the end of the read operation, :c:func:`!vma_end_read` is
called to release the VMA read lock.

Invoking :c:func:`!vma_start_read` requires that :c:func:`!rcu_read_lock` has
been called first, establishing that we are in an RCU critical section upon
VMA read lock acquisition. Once acquired, the RCU lock can be released, as it
is only required for the lookup. This is abstracted by
:c:func:`!lock_vma_under_rcu`, which is the interface a user should use.

Writing requires the mmap lock to be write-locked and the VMA lock to be
acquired via :c:func:`!vma_start_write`; however, the write lock is released
by the termination or downgrade of the mmap write lock, so no
:c:func:`!vma_end_write` is required.

All this is achieved by the use of per-mm and per-VMA sequence counts, which
are used in order to reduce complexity, especially for operations which
write-lock multiple VMAs at once.

If the mm sequence count, :c:member:`!mm->mm_lock_seq`, is equal to the VMA
sequence count :c:member:`!vma->vm_lock_seq`, then the VMA is write-locked;
if they differ, then it is not.
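Conceptually, the write-locked check reduces to a sequence number comparison.
The sketch below is a simplification under that assumption - the helper name
is hypothetical, and the real kernel check must also take care of memory
ordering and the exact seqcount representation:

.. code-block:: c

	/*
	 * Hypothetical helper: a VMA is write-locked iff its sequence number
	 * matches that of its containing mm.
	 */
	static inline bool vma_write_locked_sketch(struct vm_area_struct *vma)
	{
		return READ_ONCE(vma->vm_lock_seq) ==
		       READ_ONCE(vma->vm_mm->mm_lock_seq.sequence);
	}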
Each time the mmap write lock is released in :c:func:`!mmap_write_unlock` or
:c:func:`!mmap_write_downgrade`, :c:func:`!vma_end_write_all` is invoked,
which also increments :c:member:`!mm->mm_lock_seq` via
:c:func:`!mm_lock_seqcount_end`.

This way, we ensure that, regardless of the VMA's sequence number, a write
lock is never incorrectly indicated and that, when we release an mmap write
lock, we efficiently release **all** VMA write locks contained within the
mmap at the same time.

Since the mmap write lock is exclusive against others who hold it, the
automatic release of any VMA locks on its release makes sense, as you would
never want to keep VMAs locked across entirely separate write operations. It
also maintains correct lock ordering.

Each time a VMA read lock is acquired, we increment the
:c:member:`!vma.vm_refcnt` reference counter and check that the sequence
count of the VMA does not match that of the mm.

If it does, the read lock fails and :c:member:`!vma.vm_refcnt` is dropped. If
it does not, we keep the reference counter raised, excluding writers, but
permitting other readers, who can also obtain this lock under RCU.
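Put together, the optimistic read-side fast path looks roughly like the
sketch below. It reuses the hypothetical :c:func:`!vma_write_locked_sketch`
helper from above, and the reference counting calls are illustrative - the
kernel's internal variants additionally handle overflow limits, memory
ordering and writer wake-ups:

.. code-block:: c

	/* Illustrative sketch of the optimistic read-lock fast path. */
	static bool vma_read_trylock_sketch(struct vm_area_struct *vma)
	{
		if (vma_write_locked_sketch(vma))	/* writer holds it */
			return false;
		if (!refcount_inc_not_zero(&vma->vm_refcnt))
			return false;			/* contended */
		if (vma_write_locked_sketch(vma)) {	/* a writer raced in */
			refcount_dec(&vma->vm_refcnt);
			return false;
		}
		return true;	/* read lock held via raised vm_refcnt */
	}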
Importantly, maple tree operations performed in :c:func:`!lock_vma_under_rcu`
are also RCU safe, so the whole read lock operation is guaranteed to function
correctly.

On the write side, we set a bit in :c:member:`!vma.vm_refcnt` which can't be
modified by readers, and wait for all readers to drop their reference count.
Once there are no readers, the VMA's sequence number is set to match that of
the mm. The mmap write lock is held during this entire operation.

This way, if any read locks are in effect, :c:func:`!vma_start_write` will
sleep until these are finished and mutual exclusion is achieved.

After setting the VMA's sequence number, the bit in :c:member:`!vma.vm_refcnt`
indicating a writer is cleared. From this point on, the VMA's sequence number
will indicate the VMA's write-locked state until the mmap write lock is
dropped or downgraded.
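The writer-side sequence is sketched below. The bit-manipulation and waiting
helpers are hypothetical stand-ins for logic that lives inside
:c:func:`!vma_start_write`; only the overall ordering of steps is the point:

.. code-block:: c

	/* Illustrative sketch of the writer-side steps, not the kernel's code. */
	static void vma_write_lock_sketch(struct vm_area_struct *vma)
	{
		mmap_assert_write_locked(vma->vm_mm);	/* precondition */

		/* Hypothetical: flag a writer in vm_refcnt so readers back off. */
		set_writer_bit(&vma->vm_refcnt);
		/* Hypothetical: sleep until all existing readers drop their refs. */
		wait_for_readers_to_drain(vma);

		/* Matching sequence numbers mark the VMA write-locked. */
		WRITE_ONCE(vma->vm_lock_seq, vma->vm_mm->mm_lock_seq.sequence);

		/* Readers may now observe the new sequence number and fail. */
		clear_writer_bit(&vma->vm_refcnt);
	}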
This clever combination of a reference counter and sequence count allows for
fast RCU-based per-VMA lock acquisition (especially on page fault, though
utilised elsewhere) with minimal complexity around lock ordering.

mmap write lock downgrading
---------------------------

When an mmap write lock is held, one has exclusive access to resources within
the mmap (with the usual caveats about requiring VMA write locks to avoid
races with tasks holding VMA read locks).

It is then possible to **downgrade** from a write lock to a read lock via
:c:func:`!mmap_write_downgrade` which, similarly to
:c:func:`!mmap_write_unlock`, implicitly terminates all VMA write locks via
:c:func:`!vma_end_write_all`, but importantly does not relinquish the mmap
lock while downgrading, therefore keeping the locked virtual address space
stable.

An interesting consequence of this is that downgraded locks are exclusive
against any other task possessing a downgraded lock (since a racing task
would have to acquire a write lock first to downgrade it, and the downgraded
lock prevents a new write lock from being obtained until the original lock is
released).
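In caller terms the downgrade pattern is straightforward; the following is a
minimal sketch (illustrative, not verbatim kernel code):

.. code-block:: c

	/* Minimal sketch: mutate, then downgrade for a stable read phase. */
	mmap_write_lock(mm);
	/* ... modify VMAs, write-locking each via vma_start_write() ... */
	mmap_write_downgrade(mm);	/* ends all VMA write locks, keeps mmap lock */
	/* ... the address space cannot change beneath us here ... */
	mmap_read_unlock(mm);	/* a downgraded lock is released as a read lock */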
For clarity, we map read (R)/downgraded write (D)/write (W) locks against one
another, showing which locks exclude the others:

.. list-table:: Lock exclusivity
   :widths: 5 5 5 5
   :header-rows: 1
   :stub-columns: 1

   * -
     - R
     - D
     - W
   * - R
     - N
     - N
     - Y
   * - D
     - N
     - Y
     - Y
   * - W
     - Y
     - Y
     - Y

Here a Y indicates that the locks in the matching row/column are mutually
exclusive, and an N indicates that they are not.

Stack expansion
---------------

Stack expansion throws up additional complexities in that we cannot permit
there to be racing page faults; as a result, we invoke
:c:func:`!vma_start_write` to prevent this in :c:func:`!expand_downwards` or
:c:func:`!expand_upwards`.
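To show where that write lock sits in the expansion path, here is a heavily
simplified sketch; the real functions also validate rlimits, guard gaps and
accounting before growing the VMA:

.. code-block:: c

	/* Heavily simplified sketch of downward stack expansion. */
	static int expand_downwards_sketch(struct vm_area_struct *vma,
					   unsigned long address)
	{
		/* Exclude racing page faults on this VMA before resizing it. */
		vma_start_write(vma);

		/* ... checks elided: grow the VMA to cover the faulting address ... */
		vma->vm_start = address;

		return 0;
	}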