sphinx.addnodesdocument)}( rawsourcechildren]( translations LanguagesNode)}(hhh](h pending_xref)}(hhh]docutils.nodesTextChinese (Simplified)}parenthsba attributes}(ids]classes]names]dupnames]backrefs] refdomainstdreftypedoc reftarget./translations/zh_CN/admin-guide/mm/userfaultfdmodnameN classnameN refexplicitutagnamehhh ubh)}(hhh]hChinese (Traditional)}hh2sbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget./translations/zh_TW/admin-guide/mm/userfaultfdmodnameN classnameN refexplicituh1hhh ubh)}(hhh]hItalian}hhFsbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget./translations/it_IT/admin-guide/mm/userfaultfdmodnameN classnameN refexplicituh1hhh ubh)}(hhh]hJapanese}hhZsbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget./translations/ja_JP/admin-guide/mm/userfaultfdmodnameN classnameN refexplicituh1hhh ubh)}(hhh]hKorean}hhnsbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget./translations/ko_KR/admin-guide/mm/userfaultfdmodnameN classnameN refexplicituh1hhh ubh)}(hhh]hSpanish}hhsbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget./translations/sp_SP/admin-guide/mm/userfaultfdmodnameN classnameN refexplicituh1hhh ubeh}(h]h ]h"]h$]h&]current_languageEnglishuh1h hh _documenthsourceNlineNubhsection)}(hhh](htitle)}(h Userfaultfdh]h Userfaultfd}(hhhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhhhH/var/lib/git/docbuild/linux/Documentation/admin-guide/mm/userfaultfd.rsthKubh)}(hhh](h)}(h Objectiveh]h Objective}(hhhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhhhhhKubh paragraph)}(hUserfaults allow the implementation of on-demand paging from userland and more generally they allow userland to take control of various memory page faults, something otherwise only the kernel code could do.h]hUserfaults allow the implementation of on-demand paging from userland and more generally they allow userland to take control of various memory page faults, something otherwise only the kernel code could do.}(hhhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhhhhubh)}(hjFor example userfaults allows a proper and more optimal 
implementation of the ``PROT_NONE+SIGSEGV`` trick.h](hNFor example userfaults allows a proper and more optimal implementation of the }(hhhhhNhNubhliteral)}(h``PROT_NONE+SIGSEGV``h]hPROT_NONE+SIGSEGV}(hhhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhubh trick.}(hhhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhK hhhhubeh}(h] objectiveah ]h"] objectiveah$]h&]uh1hhhhhhhhKubh)}(hhh](h)}(hDesignh]hDesign}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhKubh)}(hXUserspace creates a new userfaultfd, initializes it, and registers one or more regions of virtual memory with it. Then, any page faults which occur within the region(s) result in a message being delivered to the userfaultfd, notifying userspace of the fault.h]hXUserspace creates a new userfaultfd, initializes it, and registers one or more regions of virtual memory with it. Then, any page faults which occur within the region(s) result in a message being delivered to the userfaultfd, notifying userspace of the fault.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjhhubh)}(hzThe ``userfaultfd`` (aside from registering and unregistering virtual memory ranges) provides two primary functionalities:h](hThe }(hj!hhhNhNubh)}(h``userfaultfd``h]h userfaultfd}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj!ubhg (aside from registering and unregistering virtual memory ranges) provides two primary functionalities:}(hj!hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjhhubhenumerated_list)}(hhh](h list_item)}(hM``read/POLLIN`` protocol to notify a userland thread of the faults happening h]h)}(hL``read/POLLIN`` protocol to notify a userland thread of the faults happeningh](h)}(h``read/POLLIN``h]h read/POLLIN}(hjPhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjLubh= protocol to notify a userland thread of the faults happening}(hjLhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjHubah}(h]h ]h"]h$]h&]uh1jFhjChhhhhNubjG)}(hvarious ``UFFDIO_*`` ioctls that can manage the virtual memory regions registered in the ``userfaultfd`` that allows userland to efficiently resolve the userfaults it receives via 1) or to manage the virtual 
memory in the background h]h)}(hvarious ``UFFDIO_*`` ioctls that can manage the virtual memory regions registered in the ``userfaultfd`` that allows userland to efficiently resolve the userfaults it receives via 1) or to manage the virtual memory in the backgroundh](hvarious }(hjrhhhNhNubh)}(h ``UFFDIO_*``h]hUFFDIO_*}(hjzhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjrubhE ioctls that can manage the virtual memory regions registered in the }(hjrhhhNhNubh)}(h``userfaultfd``h]h userfaultfd}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjrubh that allows userland to efficiently resolve the userfaults it receives via 1) or to manage the virtual memory in the background}(hjrhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjnubah}(h]h ]h"]h$]h&]uh1jFhjChhhhhNubeh}(h]h ]h"]h$]h&]enumtypearabicprefixhsuffix)uh1jAhjhhhhhKubh)}(hXThe real advantage of userfaults if compared to regular virtual memory management of mremap/mprotect is that the userfaults in all their operations never involve heavyweight structures like vmas (in fact the ``userfaultfd`` runtime load never takes the mmap_lock for writing). Vmas are not suitable for page- (or hugepage) granular fault tracking when dealing with virtual address spaces that could span Terabytes. Too many vmas would be needed for that.h](hThe real advantage of userfaults if compared to regular virtual memory management of mremap/mprotect is that the userfaults in all their operations never involve heavyweight structures like vmas (in fact the }(hjhhhNhNubh)}(h``userfaultfd``h]h userfaultfd}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjubh runtime load never takes the mmap_lock for writing). Vmas are not suitable for page- (or hugepage) granular fault tracking when dealing with virtual address spaces that could span Terabytes. 
Too many vmas would be needed for that.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhK"hjhhubh)}(hXThe ``userfaultfd``, once created, can also be passed using unix domain sockets to a manager process, so the same manager process could handle the userfaults of a multitude of different processes without them being aware about what is going on (well of course unless they later try to use the ``userfaultfd`` themselves on the same region the manager is already tracking, which is a corner case that would currently return ``-EBUSY``).h](hThe }(hjhhhNhNubh)}(h``userfaultfd``h]h userfaultfd}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjubhX, once created, can also be passed using unix domain sockets to a manager process, so the same manager process could handle the userfaults of a multitude of different processes without them being aware about what is going on (well of course unless they later try to use the }(hjhhhNhNubh)}(h``userfaultfd``h]h userfaultfd}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjubhs themselves on the same region the manager is already tracking, which is a corner case that would currently return }(hjhhhNhNubh)}(h ``-EBUSY``h]h-EBUSY}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjubh).}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhK*hjhhubeh}(h]designah ]h"]designah$]h&]uh1hhhhhhhhKubh)}(hhh](h)}(hAPIh]hAPI}(hj$hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj!hhhhhK3ubh)}(hhh](h)}(hCreating a userfaultfdh]hCreating a userfaultfd}(hj5hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj2hhhhhK6ubh)}(hThere are two ways to create a new userfaultfd, each of which provide ways to restrict access to this functionality (since historically userfaultfds which handle kernel page faults have been a useful tool for exploiting the kernel).h]hThere are two ways to create a new userfaultfd, each of which provide ways to restrict access to this functionality (since historically userfaultfds which handle kernel page faults have been a useful tool for exploiting the kernel).}(hjChhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK8hj2hhubh)}(hThe first way, supported since 
userfaultfd was introduced, is the userfaultfd(2) syscall. Access to this is controlled in several ways:h]hThe first way, supported since userfaultfd was introduced, is the userfaultfd(2) syscall. Access to this is controlled in several ways:}(hjQhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK atomically copies some existing page contents from userspace.}(hjChhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhj?ubah}(h]h ]h"]h$]h&]uh1jFhj<hhhhhNubjG)}(h3``UFFDIO_ZEROPAGE`` atomically zeros the new page. h]h)}(h2``UFFDIO_ZEROPAGE`` atomically zeros the new page.h](h)}(h``UFFDIO_ZEROPAGE``h]hUFFDIO_ZEROPAGE}(hjmhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjiubh atomically zeros the new page.}(hjihhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjeubah}(h]h ]h"]h$]h&]uh1jFhj<hhhhhNubjG)}(hA``UFFDIO_CONTINUE`` maps an existing, previously-populated page. h]h)}(h@``UFFDIO_CONTINUE`` maps an existing, previously-populated page.h](h)}(h``UFFDIO_CONTINUE``h]hUFFDIO_CONTINUE}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjubh- maps an existing, previously-populated page.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jFhj<hhhhhNubeh}(h]h ]h"]h$]h&]jjuh1j_hhhKhjhhubh)}(hThese operations are atomic in the sense that they guarantee nothing can see a half-populated page, since readers will keep userfaulting until the operation has finished.h]hThese operations are atomic in the sense that they guarantee nothing can see a half-populated page, since readers will keep userfaulting until the operation has finished.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjhhubh)}(hBy default, these wake up userfaults blocked on the range in question. They support a ``UFFDIO_*_MODE_DONTWAKE`` ``mode`` flag, which indicates that waking will be done separately at some later time.h](hVBy default, these wake up userfaults blocked on the range in question. 
They support a }(hjhhhNhNubh)}(h``UFFDIO_*_MODE_DONTWAKE``h]hUFFDIO_*_MODE_DONTWAKE}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjubh }(hjhhhNhNubh)}(h``mode``h]hmode}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjubhN flag, which indicates that waking will be done separately at some later time.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjhhubh)}(h`Which ioctl to choose depends on the kind of page fault, and what we'd like to do to resolve it:h]hbWhich ioctl to choose depends on the kind of page fault, and what we’d like to do to resolve it:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjhhubj`)}(hhh](jG)}(hXWFor ``UFFDIO_REGISTER_MODE_MISSING`` faults, the fault needs to be resolved by either providing a new page (``UFFDIO_COPY``), or mapping the zero page (``UFFDIO_ZEROPAGE``). By default, the kernel would map the zero page for a missing fault. With userfaultfd, userspace can decide what content to provide before the faulting thread continues. h]h)}(hXVFor ``UFFDIO_REGISTER_MODE_MISSING`` faults, the fault needs to be resolved by either providing a new page (``UFFDIO_COPY``), or mapping the zero page (``UFFDIO_ZEROPAGE``). By default, the kernel would map the zero page for a missing fault. With userfaultfd, userspace can decide what content to provide before the faulting thread continues.h](hFor }(hj hhhNhNubh)}(h ``UFFDIO_REGISTER_MODE_MISSING``h]hUFFDIO_REGISTER_MODE_MISSING}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubhH faults, the fault needs to be resolved by either providing a new page (}(hj hhhNhNubh)}(h``UFFDIO_COPY``h]h UFFDIO_COPY}(hj&hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh), or mapping the zero page (}(hj hhhNhNubh)}(h``UFFDIO_ZEROPAGE``h]hUFFDIO_ZEROPAGE}(hj8hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh). By default, the kernel would map the zero page for a missing fault. 
With userfaultfd, userspace can decide what content to provide before the faulting thread continues.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jFhjhhhhhNubjG)}(hXJFor ``UFFDIO_REGISTER_MODE_MINOR`` faults, there is an existing page (in the page cache). Userspace has the option of modifying the page's contents before resolving the fault. Once the contents are correct (modified or not), userspace asks the kernel to map the page and let the faulting thread continue with ``UFFDIO_CONTINUE``. h]h)}(hXIFor ``UFFDIO_REGISTER_MODE_MINOR`` faults, there is an existing page (in the page cache). Userspace has the option of modifying the page's contents before resolving the fault. Once the contents are correct (modified or not), userspace asks the kernel to map the page and let the faulting thread continue with ``UFFDIO_CONTINUE``.h](hFor }(hjZhhhNhNubh)}(h``UFFDIO_REGISTER_MODE_MINOR``h]hUFFDIO_REGISTER_MODE_MINOR}(hjbhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjZubhX faults, there is an existing page (in the page cache). Userspace has the option of modifying the page’s contents before resolving the fault. Once the contents are correct (modified or not), userspace asks the kernel to map the page and let the faulting thread continue with }(hjZhhhNhNubh)}(h``UFFDIO_CONTINUE``h]hUFFDIO_CONTINUE}(hjthhhNhNubah}(h]h ]h"]h$]h&]uh1hhjZubh.}(hjZhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjVubah}(h]h ]h"]h$]h&]uh1jFhjhhhhhNubeh}(h]h ]h"]h$]h&]jjuh1j_hhhKhjhhubh)}(hNotes:h]hNotes:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjhhubj`)}(hhh](jG)}(hYou can tell which kind of fault occurred by examining ``pagefault.flags`` within the ``uffd_msg``, checking for the ``UFFD_PAGEFAULT_FLAG_*`` flags. 
h]h)}(hYou can tell which kind of fault occurred by examining ``pagefault.flags`` within the ``uffd_msg``, checking for the ``UFFD_PAGEFAULT_FLAG_*`` flags.h](h7You can tell which kind of fault occurred by examining }(hjhhhNhNubh)}(h``pagefault.flags``h]hpagefault.flags}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjubh within the }(hjhhhNhNubh)}(h ``uffd_msg``h]huffd_msg}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjubh, checking for the }(hjhhhNhNubh)}(h``UFFD_PAGEFAULT_FLAG_*``h]hUFFD_PAGEFAULT_FLAG_*}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjubh flags.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jFhjhhhhhNubjG)}(hNone of the page-delivering ioctls default to the range that you registered with. You must fill in all fields for the appropriate ioctl struct including the range. h]h)}(hNone of the page-delivering ioctls default to the range that you registered with. You must fill in all fields for the appropriate ioctl struct including the range.h]hNone of the page-delivering ioctls default to the range that you registered with. You must fill in all fields for the appropriate ioctl struct including the range.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jFhjhhhhhNubjG)}(hX8You get the address of the access that triggered the missing page event out of a struct uffd_msg that you read in the thread from the uffd. You can supply as many pages as you want with these IOCTLs. Keep in mind that unless you used DONTWAKE then the first of any of those IOCTLs wakes up the faulting thread. h]h)}(hX7You get the address of the access that triggered the missing page event out of a struct uffd_msg that you read in the thread from the uffd. You can supply as many pages as you want with these IOCTLs. Keep in mind that unless you used DONTWAKE then the first of any of those IOCTLs wakes up the faulting thread.h]hX7You get the address of the access that triggered the missing page event out of a struct uffd_msg that you read in the thread from the uffd. 
You can supply as many pages as you want with these IOCTLs. Keep in mind that unless you used DONTWAKE then the first of any of those IOCTLs wakes up the faulting thread.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jFhjhhhhhNubjG)}(hBe sure to test for all errors including (``pollfd[0].revents & POLLERR``). This can happen, e.g. when ranges supplied were incorrect. h]h)}(hBe sure to test for all errors including (``pollfd[0].revents & POLLERR``). This can happen, e.g. when ranges supplied were incorrect.h](h*Be sure to test for all errors including (}(hj+hhhNhNubh)}(h``pollfd[0].revents & POLLERR``h]hpollfd[0].revents & POLLERR}(hj3hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj+ubh>). This can happen, e.g. when ranges supplied were incorrect.}(hj+hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhj'ubah}(h]h ]h"]h$]h&]uh1jFhjhhhhhNubeh}(h]h ]h"]h$]h&]jjuh1j_hhhKhjhhubeh}(h]resolving-userfaultsah ]h"]resolving userfaultsah$]h&]uh1hhj!hhhhhKubh)}(hhh](h)}(hWrite Protect Notificationsh]hWrite Protect Notifications}(hjbhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj_hhhhhKubh)}(hTThis is equivalent to (but faster than) using mprotect and a SIGSEGV signal handler.h]hTThis is equivalent to (but faster than) using mprotect and a SIGSEGV signal handler.}(hjphhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj_hhubh)}(hXaFirstly you need to register a range with ``UFFDIO_REGISTER_MODE_WP``. Instead of using mprotect(2) you use ``ioctl(uffd, UFFDIO_WRITEPROTECT, struct *uffdio_writeprotect)`` while ``mode = UFFDIO_WRITEPROTECT_MODE_WP`` in the struct passed in. The range does not default to and does not have to be identical to the range you registered with. You can write protect as many ranges as you like (inside the registered range). Then, in the thread reading from uffd the struct will have ``msg.arg.pagefault.flags & UFFD_PAGEFAULT_FLAG_WP`` set. Now you send ``ioctl(uffd, UFFDIO_WRITEPROTECT, struct *uffdio_writeprotect)`` again while ``pagefault.mode`` does not have ``UFFDIO_WRITEPROTECT_MODE_WP`` set. 
This wakes up the thread which will continue to run with writes. This allows you to do the bookkeeping about the write in the uffd reading thread before the ioctl.h](h*Firstly you need to register a range with }(hj~hhhNhNubh)}(h``UFFDIO_REGISTER_MODE_WP``h]hUFFDIO_REGISTER_MODE_WP}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj~ubh'. Instead of using mprotect(2) you use }(hj~hhhNhNubh)}(hA``ioctl(uffd, UFFDIO_WRITEPROTECT, struct *uffdio_writeprotect)``h]h=ioctl(uffd, UFFDIO_WRITEPROTECT, struct *uffdio_writeprotect)}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj~ubh while }(hj~hhhNhNubh)}(h&``mode = UFFDIO_WRITEPROTECT_MODE_WP``h]h"mode = UFFDIO_WRITEPROTECT_MODE_WP}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj~ubhX  in the struct passed in. The range does not default to and does not have to be identical to the range you registered with. You can write protect as many ranges as you like (inside the registered range). Then, in the thread reading from uffd the struct will have }(hj~hhhNhNubh)}(h4``msg.arg.pagefault.flags & UFFD_PAGEFAULT_FLAG_WP``h]h0msg.arg.pagefault.flags & UFFD_PAGEFAULT_FLAG_WP}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj~ubh set. Now you send }(hj~hhhNhNubh)}(hA``ioctl(uffd, UFFDIO_WRITEPROTECT, struct *uffdio_writeprotect)``h]h=ioctl(uffd, UFFDIO_WRITEPROTECT, struct *uffdio_writeprotect)}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj~ubh again while }(hj~hhhNhNubh)}(h``pagefault.mode``h]hpagefault.mode}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj~ubh does not have }(hj~hhhNhNubh)}(h``UFFDIO_WRITEPROTECT_MODE_WP``h]hUFFDIO_WRITEPROTECT_MODE_WP}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj~ubh set. This wakes up the thread which will continue to run with writes. This allows you to do the bookkeeping about the write in the uffd reading thread before the ioctl.}(hj~hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhj_hhubh)}(hXIf you registered with both ``UFFDIO_REGISTER_MODE_MISSING`` and ``UFFDIO_REGISTER_MODE_WP`` then you need to think about the sequence in which you supply a page and undo write protect. 
Note that there is a difference between writes into a WP area and into a !WP area. The former will have ``UFFD_PAGEFAULT_FLAG_WP`` set, the latter ``UFFD_PAGEFAULT_FLAG_WRITE``. The latter did not fail on protection but you still need to supply a page when ``UFFDIO_REGISTER_MODE_MISSING`` was used.h](hIf you registered with both }(hj hhhNhNubh)}(h ``UFFDIO_REGISTER_MODE_MISSING``h]hUFFDIO_REGISTER_MODE_MISSING}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh and }(hj hhhNhNubh)}(h``UFFDIO_REGISTER_MODE_WP``h]hUFFDIO_REGISTER_MODE_WP}(hj$ hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh then you need to think about the sequence in which you supply a page and undo write protect. Note that there is a difference between writes into a WP area and into a !WP area. The former will have }(hj hhhNhNubh)}(h``UFFD_PAGEFAULT_FLAG_WP``h]hUFFD_PAGEFAULT_FLAG_WP}(hj6 hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh set, the latter }(hj hhhNhNubh)}(h``UFFD_PAGEFAULT_FLAG_WRITE``h]hUFFD_PAGEFAULT_FLAG_WRITE}(hjH hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubhR. The latter did not fail on protection but you still need to supply a page when }(hj hhhNhNubh)}(h ``UFFDIO_REGISTER_MODE_MISSING``h]hUFFDIO_REGISTER_MODE_MISSING}(hjZ hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh was used.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhj_hhubh)}(hUserfaultfd write-protect mode currently behaves differently on none ptes (when e.g. page is missing) over different types of memories.h]hUserfaultfd write-protect mode currently behaves differently on none ptes (when e.g. page is missing) over different types of memories.}(hjr hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj_hhubh)}(hXFor anonymous memory, ``ioctl(UFFDIO_WRITEPROTECT)`` will ignore none ptes (e.g. when pages are missing and not populated). For file-backed memories like shmem and hugetlbfs, none ptes will be write protected just like a present pte. 
In other words, there will be a userfaultfd write fault message generated when writing to a missing page on file typed memories, as long as the page range was write-protected before. Such a message will not be generated on anonymous memories by default.h](hFor anonymous memory, }(hj hhhNhNubh)}(h``ioctl(UFFDIO_WRITEPROTECT)``h]hioctl(UFFDIO_WRITEPROTECT)}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubhX will ignore none ptes (e.g. when pages are missing and not populated). For file-backed memories like shmem and hugetlbfs, none ptes will be write protected just like a present pte. In other words, there will be a userfaultfd write fault message generated when writing to a missing page on file typed memories, as long as the page range was write-protected before. Such a message will not be generated on anonymous memories by default.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhj_hhubh)}(hXSIf the application wants to be able to write protect none ptes on anonymous memory, one can pre-populate the memory with e.g. MADV_POPULATE_READ. On newer kernels, one can also detect the feature UFFD_FEATURE_WP_UNPOPULATED and set the feature bit in advance to make sure none ptes will also be write protected even upon anonymous memory.h]hXSIf the application wants to be able to write protect none ptes on anonymous memory, one can pre-populate the memory with e.g. MADV_POPULATE_READ. On newer kernels, one can also detect the feature UFFD_FEATURE_WP_UNPOPULATED and set the feature bit in advance to make sure none ptes will also be write protected even upon anonymous memory.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj_hhubh)}(hXWhen using ``UFFDIO_REGISTER_MODE_WP`` in combination with either ``UFFDIO_REGISTER_MODE_MISSING`` or ``UFFDIO_REGISTER_MODE_MINOR``, when resolving missing / minor faults with ``UFFDIO_COPY`` or ``UFFDIO_CONTINUE`` respectively, it may be desirable for the new page / mapping to be write-protected (so future writes will also result in a WP fault). 
These ioctls support a mode flag (``UFFDIO_COPY_MODE_WP`` or ``UFFDIO_CONTINUE_MODE_WP`` respectively) to configure the mapping this way.h](h When using }(hj hhhNhNubh)}(h``UFFDIO_REGISTER_MODE_WP``h]hUFFDIO_REGISTER_MODE_WP}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh in combination with either }(hj hhhNhNubh)}(h ``UFFDIO_REGISTER_MODE_MISSING``h]hUFFDIO_REGISTER_MODE_MISSING}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh or }(hj hhhNhNubh)}(h``UFFDIO_REGISTER_MODE_MINOR``h]hUFFDIO_REGISTER_MODE_MINOR}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh-, when resolving missing / minor faults with }(hj hhhNhNubh)}(h``UFFDIO_COPY``h]h UFFDIO_COPY}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh or }hj sbh)}(h``UFFDIO_CONTINUE``h]hUFFDIO_CONTINUE}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh respectively, it may be desirable for the new page / mapping to be write-protected (so future writes will also result in a WP fault). These ioctls support a mode flag (}(hj hhhNhNubh)}(h``UFFDIO_COPY_MODE_WP``h]hUFFDIO_COPY_MODE_WP}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh or }hj sbh)}(h``UFFDIO_CONTINUE_MODE_WP``h]hUFFDIO_CONTINUE_MODE_WP}(hj" hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh1 respectively) to configure the mapping this way.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhj_hhubh)}(hIf the userfaultfd context has ``UFFD_FEATURE_WP_ASYNC`` feature bit set, any vma registered with write-protection will work in async mode rather than the default sync mode.h](hIf the userfaultfd context has }(hj: hhhNhNubh)}(h``UFFD_FEATURE_WP_ASYNC``h]hUFFD_FEATURE_WP_ASYNC}(hjB hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj: ubhu feature bit set, any vma registered with write-protection will work in async mode rather than the default sync mode.}(hj: hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhj_hhubh)}(hXIn async mode, there will be no message generated when a write operation happens, meanwhile the write-protection will be resolved automatically by the kernel. 
It can be seen as a more accurate version of soft-dirty tracking and it can be different in a few ways:h]hXIn async mode, there will be no message generated when a write operation happens, meanwhile the write-protection will be resolved automatically by the kernel. It can be seen as a more accurate version of soft-dirty tracking and it can be different in a few ways:}(hjZ hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj_hhubh block_quote)}(hX- The dirty result will not be affected by vma changes (e.g. vma merging) because the dirty is only tracked by the pte. - It supports range operations by default, so one can enable tracking on any range of memory as long as page aligned. - Dirty information will not get lost if the pte was zapped due to various reasons (e.g. during split of a shmem transparent huge page). - Due to a reverted meaning of soft-dirty (page clean when uffd-wp bit set; dirty when uffd-wp bit cleared), it has different semantics on some of the memory operations. For example: ``MADV_DONTNEED`` on anonymous (or ``MADV_REMOVE`` on a file mapping) will be treated as dirtying of memory by dropping uffd-wp bit during the procedure. h]j`)}(hhh](jG)}(hvThe dirty result will not be affected by vma changes (e.g. vma merging) because the dirty is only tracked by the pte. h]h)}(huThe dirty result will not be affected by vma changes (e.g. vma merging) because the dirty is only tracked by the pte.h]huThe dirty result will not be affected by vma changes (e.g. vma merging) because the dirty is only tracked by the pte.}(hju hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjq ubah}(h]h ]h"]h$]h&]uh1jFhjn ubjG)}(htIt supports range operations by default, so one can enable tracking on any range of memory as long as page aligned. 
h]h)}(hsIt supports range operations by default, so one can enable tracking on any range of memory as long as page aligned.h]hsIt supports range operations by default, so one can enable tracking on any range of memory as long as page aligned.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj ubah}(h]h ]h"]h$]h&]uh1jFhjn ubjG)}(hDirty information will not get lost if the pte was zapped due to various reasons (e.g. during split of a shmem transparent huge page). h]h)}(hDirty information will not get lost if the pte was zapped due to various reasons (e.g. during split of a shmem transparent huge page).h]hDirty information will not get lost if the pte was zapped due to various reasons (e.g. during split of a shmem transparent huge page).}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM hj ubah}(h]h ]h"]h$]h&]uh1jFhjn ubjG)}(hXPDue to a reverted meaning of soft-dirty (page clean when uffd-wp bit set; dirty when uffd-wp bit cleared), it has different semantics on some of the memory operations. For example: ``MADV_DONTNEED`` on anonymous (or ``MADV_REMOVE`` on a file mapping) will be treated as dirtying of memory by dropping uffd-wp bit during the procedure. h]h)}(hXODue to a reverted meaning of soft-dirty (page clean when uffd-wp bit set; dirty when uffd-wp bit cleared), it has different semantics on some of the memory operations. For example: ``MADV_DONTNEED`` on anonymous (or ``MADV_REMOVE`` on a file mapping) will be treated as dirtying of memory by dropping uffd-wp bit during the procedure.h](hDue to a reverted meaning of soft-dirty (page clean when uffd-wp bit set; dirty when uffd-wp bit cleared), it has different semantics on some of the memory operations. 
For example: }(hj hhhNhNubh)}(h``MADV_DONTNEED``h]h MADV_DONTNEED}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh on anonymous (or }(hj hhhNhNubh)}(h``MADV_REMOVE``h]h MADV_REMOVE}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubhg on a file mapping) will be treated as dirtying of memory by dropping uffd-wp bit during the procedure.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhM hj ubah}(h]h ]h"]h$]h&]uh1jFhjn ubeh}(h]h ]h"]h$]h&]jjuh1j_hhhMhjj ubah}(h]h ]h"]h$]h&]uh1jh hhhMhj_hhubh)}(hThe user app can collect the "written/dirty" status by looking up the uffd-wp bit for the pages being interested in /proc/pagemap.h]hThe user app can collect the “written/dirty” status by looking up the uffd-wp bit for the pages being interested in /proc/pagemap.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj_hhubh)}(hXThe page will not be under track of uffd-wp async mode until the page is explicitly write-protected by ``ioctl(UFFDIO_WRITEPROTECT)`` with the mode flag ``UFFDIO_WRITEPROTECT_MODE_WP`` set. Trying to resolve a page fault that was tracked by async mode userfaultfd-wp is invalid.h](hgThe page will not be under track of uffd-wp async mode until the page is explicitly write-protected by }(hj hhhNhNubh)}(h``ioctl(UFFDIO_WRITEPROTECT)``h]hioctl(UFFDIO_WRITEPROTECT)}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh with the mode flag }(hj hhhNhNubh)}(h``UFFDIO_WRITEPROTECT_MODE_WP``h]hUFFDIO_WRITEPROTECT_MODE_WP}(hj) hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh_ set. 
Trying to resolve a page fault that was tracked by async mode userfaultfd-wp is invalid.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhj_hhubh)}(hWWhen userfaultfd-wp async mode is used alone, it can be applied to all kinds of memory.h]hWWhen userfaultfd-wp async mode is used alone, it can be applied to all kinds of memory.}(hjA hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj_hhubeh}(h]write-protect-notificationsah ]h"]write protect notificationsah$]h&]uh1hhj!hhhhhKubh)}(hhh](h)}(hMemory Poisoning Emulationh]hMemory Poisoning Emulation}(hjZ hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjW hhhhhMubh)}(hXIn response to a fault (either missing or minor), an action userspace can take to "resolve" it is to issue a ``UFFDIO_POISON``. This will cause any future faulters to either get a SIGBUS, or in KVM's case the guest will receive an MCE as if there were hardware memory poisoning.h](hqIn response to a fault (either missing or minor), an action userspace can take to “resolve” it is to issue a }(hjh hhhNhNubh)}(h``UFFDIO_POISON``h]h UFFDIO_POISON}(hjp hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjh ubh. This will cause any future faulters to either get a SIGBUS, or in KVM’s case the guest will receive an MCE as if there were hardware memory poisoning.}(hjh hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhM hjW hhubh)}(hXThis is used to emulate hardware memory poisoning. Imagine a VM running on a machine which experiences a real hardware memory error. Later, we live migrate the VM to another physical machine.
Since we want the migration to be transparent to the guest, we want that same address range to act as if it was still poisoned, even though it’s on a new physical host which ostensibly doesn’t have a memory error in the exact same spot.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM%hjW hhubeh}(h]memory-poisioning-emulationah ]h"]memory poisioning emulationah$]h&]uh1hhj!hhhhhMubeh}(h]apiah ]h"]apiah$]h&]uh1hhhhhhhhK3ubh)}(hhh](h)}(hQEMU/KVMh]hQEMU/KVM}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj hhhhhM-ubh)}(hXQEMU/KVM is using the ``userfaultfd`` syscall to implement postcopy live migration. Postcopy live migration is one form of memory externalization consisting of a virtual machine running with part or all of its memory residing on a different node in the cloud. The ``userfaultfd`` abstraction is generic enough that not a single line of KVM kernel code had to be modified in order to add postcopy live migration to QEMU.h](hQEMU/KVM is using the }(hj hhhNhNubh)}(h``userfaultfd``h]h userfaultfd}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh syscall to implement postcopy live migration. Postcopy live migration is one form of memory externalization consisting of a virtual machine running with part or all of its memory residing on a different node in the cloud. The }(hj hhhNhNubh)}(h``userfaultfd``h]h userfaultfd}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh abstraction is generic enough that not a single line of KVM kernel code had to be modified in order to add postcopy live migration to QEMU.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhM/hj hhubh)}(hX)Guest async page faults, ``FOLL_NOWAIT`` and all other ``GUP*`` features work just fine in combination with userfaults. Userfaults trigger async page faults in the guest scheduler so those guest processes that aren't waiting for userfaults (i.e. 
network bound) can keep running in the guest vcpus.h](hGuest async page faults, }(hj hhhNhNubh)}(h``FOLL_NOWAIT``h]h FOLL_NOWAIT}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh and all other }(hj hhhNhNubh)}(h``GUP*``h]hGUP*}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh features work just fine in combination with userfaults. Userfaults trigger async page faults in the guest scheduler so those guest processes that aren’t waiting for userfaults (i.e. network bound) can keep running in the guest vcpus.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhM7hj hhubh)}(hIt is generally beneficial to run one pass of precopy live migration just before starting postcopy live migration, in order to avoid generating userfaults for readonly guest regions.h]hIt is generally beneficial to run one pass of precopy live migration just before starting postcopy live migration, in order to avoid generating userfaults for readonly guest regions.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM=hj hhubh)}(hXThe implementation of postcopy live migration currently uses one single bidirectional socket but in the future two different sockets will be used (to reduce the latency of the userfaults to the minimum possible without having to decrease ``/proc/sys/net/ipv4/tcp_wmem``).h](hThe implementation of postcopy live migration currently uses one single bidirectional socket but in the future two different sockets will be used (to reduce the latency of the userfaults to the minimum possible without having to decrease }(hj) hhhNhNubh)}(h``/proc/sys/net/ipv4/tcp_wmem``h]h/proc/sys/net/ipv4/tcp_wmem}(hj1 hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj) ubh).}(hj) hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMAhj hhubh)}(hXeThe QEMU in the source node writes all pages that it knows are missing in the destination node, into the socket, and the migration thread of the QEMU running in the destination node runs ``UFFDIO_COPY|ZEROPAGE`` ioctls on the ``userfaultfd`` in order to map the received pages into the guest (``UFFDIO_ZEROPAGE`` is used if the source page 
was a zero page).h](hThe QEMU in the source node writes all pages that it knows are missing in the destination node, into the socket, and the migration thread of the QEMU running in the destination node runs }(hjI hhhNhNubh)}(h``UFFDIO_COPY|ZEROPAGE``h]hUFFDIO_COPY|ZEROPAGE}(hjQ hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjI ubh ioctls on the }(hjI hhhNhNubh)}(h``userfaultfd``h]h userfaultfd}(hjc hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjI ubh4 in order to map the received pages into the guest (}(hjI hhhNhNubh)}(h``UFFDIO_ZEROPAGE``h]hUFFDIO_ZEROPAGE}(hju hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjI ubh- is used if the source page was a zero page).}(hjI hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMFhj hhubh)}(hXA different postcopy thread in the destination node listens with poll() to the ``userfaultfd`` in parallel. When a ``POLLIN`` event is generated after a userfault triggers, the postcopy thread read() from the ``userfaultfd`` and receives the fault address (or ``-EAGAIN`` in case the userfault was already resolved and woken by a ``UFFDIO_COPY|ZEROPAGE`` run by the parallel QEMU migration thread).h](hOA different postcopy thread in the destination node listens with poll() to the }(hj hhhNhNubh)}(h``userfaultfd``h]h userfaultfd}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh in parallel. 
When a }(hj hhhNhNubh)}(h ``POLLIN``h]hPOLLIN}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubhT event is generated after a userfault triggers, the postcopy thread read() from the }(hj hhhNhNubh)}(h``userfaultfd``h]h userfaultfd}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh$ and receives the fault address (or }(hj hhhNhNubh)}(h ``-EAGAIN``h]h-EAGAIN}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh; in case the userfault was already resolved and woken by a }(hj hhhNhNubh)}(h``UFFDIO_COPY|ZEROPAGE``h]hUFFDIO_COPY|ZEROPAGE}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh, run by the parallel QEMU migration thread).}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMLhj hhubh)}(hXAfter the QEMU postcopy thread (running in the destination node) gets the userfault address it writes the information about the missing page into the socket. The QEMU source node receives the information and roughly "seeks" to that page address and continues sending all remaining missing pages from that new page offset. Soon after that (just the time to flush the tcp_wmem queue through the network) the migration thread in the QEMU running in the destination node will receive the page that triggered the userfault and it'll map it as usual with the ``UFFDIO_COPY|ZEROPAGE`` (without actually knowing if it was spontaneously sent by the source or if it was an urgent page requested through a userfault).h](hX/After the QEMU postcopy thread (running in the destination node) gets the userfault address it writes the information about the missing page into the socket. The QEMU source node receives the information and roughly “seeks” to that page address and continues sending all remaining missing pages from that new page offset. 
Soon after that (just the time to flush the tcp_wmem queue through the network) the migration thread in the QEMU running in the destination node will receive the page that triggered the userfault and it’ll map it as usual with the }(hj hhhNhNubh)}(h``UFFDIO_COPY|ZEROPAGE``h]hUFFDIO_COPY|ZEROPAGE}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh (without actually knowing if it was spontaneously sent by the source or if it was an urgent page requested through a userfault).}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMShj hhubh)}(hXBy the time the userfaults start, the QEMU in the destination node doesn't need to keep any per-page state bitmap relative to the live migration around and a single per-page bitmap has to be maintained in the QEMU running in the source node to know which pages are still missing in the destination node. The bitmap in the source node is checked to find which missing pages to send in round robin and we seek over it when receiving incoming userfaults. After sending each page of course the bitmap is updated accordingly. It's also useful to avoid sending the same page twice (in case the userfault is read by the postcopy thread just before ``UFFDIO_COPY|ZEROPAGE`` runs in the migration thread).h](hXBy the time the userfaults start, the QEMU in the destination node doesn’t need to keep any per-page state bitmap relative to the live migration around and a single per-page bitmap has to be maintained in the QEMU running in the source node to know which pages are still missing in the destination node. The bitmap in the source node is checked to find which missing pages to send in round robin and we seek over it when receiving incoming userfaults. After sending each page of course the bitmap is updated accordingly. 
It’s also useful to avoid sending the same page twice (in case the userfault is read by the postcopy thread just before }(hj hhhNhNubh)}(h``UFFDIO_COPY|ZEROPAGE``h]hUFFDIO_COPY|ZEROPAGE}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh runs in the migration thread).}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhM_hj hhubeh}(h]qemu-kvmah ]h"]qemu/kvmah$]h&]uh1hhhhhhhhM-ubh)}(hhh](h)}(hNon-cooperative userfaultfdh]hNon-cooperative userfaultfd}(hj@ hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj= hhhhhMlubh)}(hXWhen the ``userfaultfd`` is monitored by an external manager, the manager must be able to track changes in the process virtual memory layout. Userfaultfd can notify the manager about such changes using the same read(2) protocol as for the page fault notifications. The manager has to explicitly enable these events by setting appropriate bits in ``uffdio_api.features`` passed to the ``UFFDIO_API`` ioctl:h](h When the }(hjN hhhNhNubh)}(h``userfaultfd``h]h userfaultfd}(hjV hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjN ubhXB is monitored by an external manager, the manager must be able to track changes in the process virtual memory layout. Userfaultfd can notify the manager about such changes using the same read(2) protocol as for the page fault notifications. The manager has to explicitly enable these events by setting appropriate bits in }(hjN hhhNhNubh)}(h``uffdio_api.features``h]huffdio_api.features}(hjh hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjN ubh passed to the }(hjN hhhNhNubh)}(h``UFFDIO_API``h]h UFFDIO_API}(hjz hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjN ubh ioctl:}(hjN hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMnhj= hhubhdefinition_list)}(hhh](hdefinition_list_item)}(hX>``UFFD_FEATURE_EVENT_FORK`` enable ``userfaultfd`` hooks for fork(). When this feature is enabled, the ``userfaultfd`` context of the parent process is duplicated into the newly created process. The manager receives ``UFFD_EVENT_FORK`` with file descriptor of the new ``userfaultfd`` context in the ``uffd_msg.fork``. 
h](hterm)}(h``UFFD_FEATURE_EVENT_FORK``h]h)}(hj h]hUFFD_FEATURE_EVENT_FORK}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubah}(h]h ]h"]h$]h&]uh1j hhhMzhj ubh definition)}(hhh]h)}(hX!enable ``userfaultfd`` hooks for fork(). When this feature is enabled, the ``userfaultfd`` context of the parent process is duplicated into the newly created process. The manager receives ``UFFD_EVENT_FORK`` with file descriptor of the new ``userfaultfd`` context in the ``uffd_msg.fork``.h](henable }(hj hhhNhNubh)}(h``userfaultfd``h]h userfaultfd}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh5 hooks for fork(). When this feature is enabled, the }(hj hhhNhNubh)}(h``userfaultfd``h]h userfaultfd}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubhb context of the parent process is duplicated into the newly created process. The manager receives }(hj hhhNhNubh)}(h``UFFD_EVENT_FORK``h]hUFFD_EVENT_FORK}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh! with file descriptor of the new }(hj hhhNhNubh)}(h``userfaultfd``h]h userfaultfd}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh context in the }(hj hhhNhNubh)}(h``uffd_msg.fork``h]h uffd_msg.fork}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj ubh.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMvhj ubah}(h]h ]h"]h$]h&]uh1j hj ubeh}(h]h ]h"]h$]h&]uh1j hhhMzhj ubj )}(hX0``UFFD_FEATURE_EVENT_REMAP`` enable notifications about mremap() calls. When the non-cooperative process moves a virtual memory area to a different location, the manager will receive ``UFFD_EVENT_REMAP``. The ``uffd_msg.remap`` will contain the old and new addresses of the area and its original length. h](j )}(h``UFFD_FEATURE_EVENT_REMAP``h]h)}(hj5h]hUFFD_FEATURE_EVENT_REMAP}(hj7hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj3ubah}(h]h ]h"]h$]h&]uh1j hhhMhj/ubj )}(hhh]h)}(hXenable notifications about mremap() calls. When the non-cooperative process moves a virtual memory area to a different location, the manager will receive ``UFFD_EVENT_REMAP``. 
The ``uffd_msg.remap`` will contain the old and new addresses of the area and its original length.h](henable notifications about mremap() calls. When the non-cooperative process moves a virtual memory area to a different location, the manager will receive }(hjMhhhNhNubh)}(h``UFFD_EVENT_REMAP``h]hUFFD_EVENT_REMAP}(hjUhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjMubh. The }(hjMhhhNhNubh)}(h``uffd_msg.remap``h]huffd_msg.remap}(hjghhhNhNubah}(h]h ]h"]h$]h&]uh1hhjMubhL will contain the old and new addresses of the area and its original length.}(hjMhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhM}hjJubah}(h]h ]h"]h$]h&]uh1j hj/ubeh}(h]h ]h"]h$]h&]uh1j hhhMhj hhubj )}(hX``UFFD_FEATURE_EVENT_REMOVE`` enable notifications about madvise(MADV_REMOVE) and madvise(MADV_DONTNEED) calls. The event ``UFFD_EVENT_REMOVE`` will be generated upon these calls to madvise(). The ``uffd_msg.remove`` will contain start and end addresses of the removed area. h](j )}(h``UFFD_FEATURE_EVENT_REMOVE``h]h)}(hjh]hUFFD_FEATURE_EVENT_REMOVE}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjubah}(h]h ]h"]h$]h&]uh1j hhhMhjubj )}(hhh]h)}(henable notifications about madvise(MADV_REMOVE) and madvise(MADV_DONTNEED) calls. The event ``UFFD_EVENT_REMOVE`` will be generated upon these calls to madvise(). The ``uffd_msg.remove`` will contain start and end addresses of the removed area.h](h\enable notifications about madvise(MADV_REMOVE) and madvise(MADV_DONTNEED) calls. The event }(hjhhhNhNubh)}(h``UFFD_EVENT_REMOVE``h]hUFFD_EVENT_REMOVE}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjubh6 will be generated upon these calls to madvise(). The }(hjhhhNhNubh)}(h``uffd_msg.remove``h]huffd_msg.remove}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjubh: will contain start and end addresses of the removed area.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhjubah}(h]h ]h"]h$]h&]uh1j hjubeh}(h]h ]h"]h$]h&]uh1j hhhMhj hhubj )}(h``UFFD_FEATURE_EVENT_UNMAP`` enable notifications about memory unmapping. 
The manager will get ``UFFD_EVENT_UNMAP`` with ``uffd_msg.remove`` containing start and end addresses of the unmapped area. h](j )}(h``UFFD_FEATURE_EVENT_UNMAP``h]h)}(hjh]hUFFD_FEATURE_EVENT_UNMAP}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjubah}(h]h ]h"]h$]h&]uh1j hhhMhjubj )}(hhh]h)}(henable notifications about memory unmapping. The manager will get ``UFFD_EVENT_UNMAP`` with ``uffd_msg.remove`` containing start and end addresses of the unmapped area.h](hBenable notifications about memory unmapping. The manager will get }(hjhhhNhNubh)}(h``UFFD_EVENT_UNMAP``h]hUFFD_EVENT_UNMAP}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjubh with }(hjhhhNhNubh)}(h``uffd_msg.remove``h]huffd_msg.remove}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjubh9 containing start and end addresses of the unmapped area.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhjubah}(h]h ]h"]h$]h&]uh1j hjubeh}(h]h ]h"]h$]h&]uh1j hhhMhj hhubeh}(h]h ]h"]h$]h&]uh1j hj= hhhhhNubh)}(hXgAlthough the ``UFFD_FEATURE_EVENT_REMOVE`` and ``UFFD_FEATURE_EVENT_UNMAP`` are pretty similar, they quite differ in the action expected from the ``userfaultfd`` manager. In the former case, the virtual memory is removed, but the area is not, the area remains monitored by the ``userfaultfd``, and if a page fault occurs in that area it will be delivered to the manager. The proper resolution for such page fault is to zeromap the faulting address. However, in the latter case, when an area is unmapped, either explicitly (with munmap() system call), or implicitly (e.g. during mremap()), the area is removed and in turn the ``userfaultfd`` context for such area disappears too and the manager will not get further userland page faults from the removed area. 
Still, the notification is required in order to prevent manager from using ``UFFDIO_COPY`` on the unmapped area.h](h Although the }(hjIhhhNhNubh)}(h``UFFD_FEATURE_EVENT_REMOVE``h]hUFFD_FEATURE_EVENT_REMOVE}(hjQhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjIubh and }(hjIhhhNhNubh)}(h``UFFD_FEATURE_EVENT_UNMAP``h]hUFFD_FEATURE_EVENT_UNMAP}(hjchhhNhNubah}(h]h ]h"]h$]h&]uh1hhjIubhG are pretty similar, they quite differ in the action expected from the }(hjIhhhNhNubh)}(h``userfaultfd``h]h userfaultfd}(hjuhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjIubht manager. In the former case, the virtual memory is removed, but the area is not, the area remains monitored by the }(hjIhhhNhNubh)}(h``userfaultfd``h]h userfaultfd}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjIubhXM, and if a page fault occurs in that area it will be delivered to the manager. The proper resolution for such page fault is to zeromap the faulting address. However, in the latter case, when an area is unmapped, either explicitly (with munmap() system call), or implicitly (e.g. during mremap()), the area is removed and in turn the }(hjIhhhNhNubh)}(h``userfaultfd``h]h userfaultfd}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjIubh context for such area disappears too and the manager will not get further userland page faults from the removed area. Still, the notification is required in order to prevent manager from using }(hjIhhhNhNubh)}(h``UFFDIO_COPY``h]h UFFDIO_COPY}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjIubh on the unmapped area.}(hjIhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhj= hhubh)}(hXsUnlike userland page faults which have to be synchronous and require explicit or implicit wakeup, all the events are delivered asynchronously and the non-cooperative process resumes execution as soon as manager executes read(). The ``userfaultfd`` manager should carefully synchronize calls to ``UFFDIO_COPY`` with the events processing. 
To aid the synchronization, the ``UFFDIO_COPY`` ioctl will return ``-ENOSPC`` when the monitored process exits at the time of ``UFFDIO_COPY``, and ``-ENOENT``, when the non-cooperative process has changed its virtual memory layout simultaneously with outstanding ``UFFDIO_COPY`` operation.h](hUnlike userland page faults which have to be synchronous and require explicit or implicit wakeup, all the events are delivered asynchronously and the non-cooperative process resumes execution as soon as manager executes read(). The }(hjhhhNhNubh)}(h``userfaultfd``h]h userfaultfd}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjubh/ manager should carefully synchronize calls to }(hjhhhNhNubh)}(h``UFFDIO_COPY``h]h UFFDIO_COPY}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjubh= with the events processing. To aid the synchronization, the }(hjhhhNhNubh)}(h``UFFDIO_COPY``h]h UFFDIO_COPY}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjubh ioctl will return }(hjhhhNhNubh)}(h ``-ENOSPC``h]h-ENOSPC}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjubh1 when the monitored process exits at the time of }(hjhhhNhNubh)}(h``UFFDIO_COPY``h]h UFFDIO_COPY}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjubh, and }(hjhhhNhNubh)}(h ``-ENOENT``h]h-ENOENT}(hj%hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjubhi, when the non-cooperative process has changed its virtual memory layout simultaneously with outstanding }(hjhhhNhNubh)}(h``UFFDIO_COPY``h]h UFFDIO_COPY}(hj7hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjubh operation.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhj= hhubh)}(hXThe current asynchronous model of the event delivery is optimal for single threaded non-cooperative ``userfaultfd`` manager implementations. A synchronous event delivery model can be added later as a new ``userfaultfd`` feature to facilitate multithreading enhancements of the non cooperative manager, for example to allow ``UFFDIO_COPY`` ioctls to run in parallel to the event reception. 
Single threaded implementations should continue to use the current async event delivery model instead.h](hdThe current asynchronous model of the event delivery is optimal for single threaded non-cooperative }(hjOhhhNhNubh)}(h``userfaultfd``h]h userfaultfd}(hjWhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjOubhY manager implementations. A synchronous event delivery model can be added later as a new }(hjOhhhNhNubh)}(h``userfaultfd``h]h userfaultfd}(hjihhhNhNubah}(h]h ]h"]h$]h&]uh1hhjOubhh feature to facilitate multithreading enhancements of the non cooperative manager, for example to allow }(hjOhhhNhNubh)}(h``UFFDIO_COPY``h]h UFFDIO_COPY}(hj{hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjOubh ioctls to run in parallel to the event reception. Single threaded implementations should continue to use the current async event delivery model instead.}(hjOhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhj= hhubeh}(h]jah ]h"]non-cooperative userfaultfdah$]h&]uh1hhhhhhhhMl referencedKubeh}(h] userfaultfdah ]h"] userfaultfdah$]h&]uh1hhhhhhhhKubeh}(h]h ]h"]h$]h&]sourcehuh1hcurrent_sourceN current_lineNsettingsdocutils.frontendValues)}(hN generatorN datestampN source_linkN source_urlN toc_backlinksentryfootnote_backlinksK sectnum_xformKstrip_commentsNstrip_elements_with_classesN strip_classesN report_levelK halt_levelKexit_status_levelKdebugNwarning_streamN tracebackinput_encoding utf-8-siginput_encoding_error_handlerstrictoutput_encodingutf-8output_encoding_error_handlerjerror_encodingutf-8error_encoding_error_handlerbackslashreplace language_codeenrecord_dependenciesNconfigN id_prefixhauto_id_prefixid dump_settingsNdump_internalsNdump_transformsNdump_pseudo_xmlNexpose_internalsNstrict_visitorN_disable_configN_sourceh _destinationN _config_files]7/var/lib/git/docbuild/linux/Documentation/docutils.confafile_insertion_enabled raw_enabledKline_length_limitM'pep_referencesN pep_base_urlhttps://peps.python.org/pep_file_url_templatepep-%04drfc_referencesN rfc_base_url&https://datatracker.ietf.org/doc/html/ 
tab_widthKtrim_footnote_reference_spacesyntax_highlightlong smart_quotessmartquotes_locales]character_level_inline_markupdoctitle_xform docinfo_xformKsectsubtitle_xform image_loadinglinkembed_stylesheetcloak_email_addressessection_self_linkenvNubreporterNindirect_targets]substitution_defs}substitution_names}refnames}non-cooperative userfaultfd]j asrefids}nameids}(jjhhjjj j jjjjj\jYjT jQ j j j: j7 jju nametypes}(jhjj jjj\jT j j: juh}(jhhhjjj j!jj2jjjYjjQ j_j jW j7 j jj= u footnote_refs} citation_refs} autofootnotes]autofootnote_refs]symbol_footnotes]symbol_footnote_refs] footnotes] citations]autofootnote_startKsymbol_footnote_startK id_counter collectionsCounter}Rparse_messages]transform_messages] transformerN include_log] decorationNhhub.