==========================================
I915 VM_BIND feature design and use cases
==========================================

VM_BIND feature
================

The DRM_I915_GEM_VM_BIND/UNBIND ioctls allow the UMD to bind/unbind GEM
buffer objects (BOs), or sections of a BO, at specified GPU virtual addresses
on a specified address space (VM). These mappings (also referred to as
persistent mappings) will be persistent across multiple GPU submissions
(execbuf calls) issued by the UMD, without the user having to provide a list
of all required mappings during each submission (as required by the older
execbuf mode).

The VM_BIND/UNBIND calls allow UMDs to request a timeline out fence for
signaling the completion of the bind/unbind operation.

The VM_BIND feature is advertised to the user via I915_PARAM_VM_BIND_VERSION.
The user has to opt in to the VM_BIND mode of binding for an address space
(VM) at VM creation time via the I915_VM_CREATE_FLAGS_USE_VM_BIND extension.

VM_BIND/UNBIND ioctl calls executed on different CPU threads concurrently are
not ordered. Furthermore, parts of the VM_BIND/UNBIND operations can be done
asynchronously, when a valid out fence is specified.
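A minimal userspace sketch of this opt-in and bind flow follows. The
DRM_IOCTL_I915_GEM_VM_CREATE ioctl and struct drm_i915_gem_vm_control are
existing uapi; the DRM_IOCTL_I915_GEM_VM_BIND wrapper and the
struct drm_i915_gem_vm_bind layout are the proposed uapi of this RFC, not a
merged interface::

    /* Sketch only: DRM_IOCTL_I915_GEM_VM_BIND and struct drm_i915_gem_vm_bind
     * are proposed (RFC) uapi. */
    #include <string.h>
    #include <sys/ioctl.h>
    #include <drm/i915_drm.h>

    static int create_vm_bind_vm_and_map(int fd, __u32 bo_handle,
                                         __u64 gpu_va, __u64 size)
    {
        struct drm_i915_gem_vm_control ctl;
        struct drm_i915_gem_vm_bind bind;

        /* Opt in to VM_BIND mode at VM creation time. */
        memset(&ctl, 0, sizeof(ctl));
        ctl.flags = I915_VM_CREATE_FLAGS_USE_VM_BIND;
        if (ioctl(fd, DRM_IOCTL_I915_GEM_VM_CREATE, &ctl))
            return -1;

        /* Create a persistent mapping of the whole BO at gpu_va.
         * No out fence is requested, so the bind completes synchronously. */
        memset(&bind, 0, sizeof(bind));
        bind.vm_id = ctl.vm_id;
        bind.handle = bo_handle;
        bind.start = gpu_va;
        bind.offset = 0;
        bind.length = size;
        return ioctl(fd, DRM_IOCTL_I915_GEM_VM_BIND, &bind);
    }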
VM_BIND features include:

* Multiple Virtual Address (VA) mappings can map to the same physical pages
  of an object (aliasing).
* A VA mapping can map to a partial section of the BO (partial binding).
* Support for capture of persistent mappings in the dump upon GPU error.
* Support for userptr gem objects (no special uapi is required for this).

TLB flush consideration
------------------------
The i915 driver flushes the TLB for each submission and when an object's
pages are released. The VM_BIND/UNBIND operation will not do any additional
TLB flush. Any VM_BIND mapping added will be in the working set for
subsequent submissions on that VM and will not be in the working set for
currently running batches (which would require additional TLB flushes, which
is not supported).

Execbuf ioctl in VM_BIND mode
------------------------------
A VM in VM_BIND mode will not support the older execbuf mode of binding.
The execbuf ioctl handling in VM_BIND mode differs significantly from the
older execbuf2 ioctl (See struct drm_i915_gem_execbuffer2). Hence, a new
execbuf3 ioctl has been added to support VM_BIND mode (See struct
drm_i915_gem_execbuffer3). The execbuf3 ioctl will not accept any execlist;
hence, there is no support for implicit sync. It is expected that the below
work will be able to support the requirements of object dependency setting
in all use cases:

"dma-buf: Add an API for exporting sync files"
(https://lwn.net/Articles/859290/)

The new execbuf3 ioctl only works in VM_BIND mode, and the VM_BIND mode only
works with the execbuf3 ioctl for submission. All BOs mapped on that VM
(through VM_BIND calls) at the time of the execbuf3 call are deemed required
for that submission.

The execbuf3 ioctl directly specifies the batch addresses instead of object
handles, as in the execbuf2 ioctl. The execbuf3 ioctl will also not support
many of the older features like in/out/submit fences, fence array, default
gem context and many more (See struct drm_i915_gem_execbuffer3).

In VM_BIND mode, VA allocation is completely managed by the user instead of
the kernel. Hence VA assignment and eviction are not applicable in VM_BIND
mode. Also, for determining object activeness, VM_BIND mode will not be
using the i915_vma active reference tracking; it will instead use the
dma-resv object for that (See `VM_BIND dma_resv usage`_).
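For explicit dependency setting on shared buffers without execlists, the
"dma-buf: Add an API for exporting sync files" series referenced above
proposes exporting a dma-buf's reservation fences as a sync file. A sketch
assuming the interface from that series (struct dma_buf_export_sync_file and
DMA_BUF_IOCTL_EXPORT_SYNC_FILE)::

    #include <linux/dma-buf.h>
    #include <sys/ioctl.h>

    /* Export the current fences of a shared dma-buf as a sync_file fd.
     * The fd can then be turned into a syncobj and passed to execbuf3 as
     * an input timeline fence, replacing implicit sync. */
    static int export_dmabuf_fences(int dmabuf_fd)
    {
        struct dma_buf_export_sync_file arg = {
            .flags = DMA_BUF_SYNC_RW,
            .fd = -1,
        };

        if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &arg))
            return -1;
        return arg.fd; /* sync_file fd carrying the exported fences */
    }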
VM_BIND locking hierarchy
--------------------------
The locking design here supports the older (execlist based) execbuf mode,
the newer VM_BIND mode, the VM_BIND mode with GPU page faults and possible
future system allocator support (See `Shared Virtual Memory (SVM) support`_).
The older execbuf mode and the newer VM_BIND mode without page faults manage
residency of backing storage using dma_fence. The VM_BIND mode with page
faults and the system allocator support do not use any dma_fence at all.

The VM_BIND locking order is as below.

1) Lock-A: A vm_bind mutex will protect vm_bind lists. This lock is taken in
   vm_bind/vm_unbind ioctl calls, in the execbuf path and while releasing
   the mapping.

2) Lock-B: The object's dma-resv lock will protect i915_vma state and needs
   to be held while binding/unbinding a vma in the async worker and while
   updating the dma-resv fence list of an object.

3) Lock-C: Spinlock/s to protect some of the VM's lists, like the list of
   invalidated vmas (due to eviction and userptr invalidation) etc.

When GPU page faults are supported, the execbuf path does not take any of
these locks. There we will simply smash the new batch buffer address into
the ring and then tell the scheduler to run that. The lock taking only
happens from the page fault handler, where we take lock-A in read mode,
whichever lock-B we need to find the backing storage (dma_resv lock for gem
objects, and hmm/core mm for system allocator) and some additional locks
(lock-D) for taking care of page table races. Page fault mode should not
need to ever manipulate the vm lists, so it won't ever need lock-C.

VM_BIND LRU handling
---------------------
We need to ensure VM_BIND mapped objects are properly LRU tagged to avoid
performance degradation. We will also need support for bulk LRU movement of
VM_BIND objects to avoid additional latencies in the execbuf path.

The page table pages are similar to VM_BIND mapped objects (See `Evictable
page table allocations`_). They are maintained per VM and need to be pinned
in memory when the VM is made active (i.e., upon an execbuf call with that
VM). So, bulk LRU movement of page table pages is also needed.
VM_BIND dma_resv usage
-----------------------
Fences need to be added to all VM_BIND mapped objects. During each execbuf
submission, they are added with the DMA_RESV_USAGE_BOOKKEEP usage to prevent
over sync (See enum dma_resv_usage). One can override it with either the
DMA_RESV_USAGE_READ or the DMA_RESV_USAGE_WRITE usage during explicit object
dependency setting.

Note that the DRM_I915_GEM_WAIT and DRM_I915_GEM_BUSY ioctls do not check
for the DMA_RESV_USAGE_BOOKKEEP usage and hence should not be used for the
end-of-batch check. Instead, the execbuf3 out fence should be used for the
end-of-batch check (See struct drm_i915_gem_execbuffer3).

Also, in VM_BIND mode, use the dma-resv apis for determining object
activeness (See dma_resv_test_signaled() and dma_resv_wait_timeout()) and do
not use the older i915_vma active reference tracking, which is deprecated.
This should also make it easier to get things working with the current TTM
backend.
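A kernel-side sketch (not verbatim i915 code) of the fence bookkeeping
described above, using the existing dma-resv API::

    #include <linux/dma-resv.h>

    /* Attach a submission's completion fence to a VM_BIND mapped object
     * with BOOKKEEP usage, so that it does not cause implicit sync. */
    static int vm_bind_tag_object(struct dma_resv *resv,
                                  struct dma_fence *fence)
    {
        int ret;

        ret = dma_resv_lock(resv, NULL);
        if (ret)
            return ret;
        ret = dma_resv_reserve_fences(resv, 1);
        if (!ret)
            dma_resv_add_fence(resv, fence, DMA_RESV_USAGE_BOOKKEEP);
        dma_resv_unlock(resv);
        return ret;
    }

Object activeness can then be queried with
dma_resv_test_signaled(resv, DMA_RESV_USAGE_BOOKKEEP) instead of the
deprecated i915_vma active reference tracking.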
Mesa use case
--------------
VM_BIND can potentially reduce the CPU overhead in Mesa (both Vulkan and
Iris), hence improving the performance of CPU-bound applications. It also
allows us to implement Vulkan's Sparse Resources. With increasing GPU
hardware performance, reducing CPU overhead becomes more impactful.

Other VM_BIND use cases
========================

Long running Compute contexts
------------------------------
dma-fences are expected to complete in a reasonable amount of time. Compute,
on the other hand, can be long running. Hence it is appropriate for compute
to use a user/memory fence (See `User/Memory Fence`_), and dma-fence usage
must be limited to in-kernel consumption only.

Where GPU page faults are not available, the kernel driver, upon buffer
invalidation, will initiate a suspend (preemption) of the long running
context, finish the invalidation, revalidate the BO and then resume the
compute context. This is done by having a per-context preempt fence which is
enabled when someone tries to wait on it and triggers the context
preemption.

User/Memory Fence
~~~~~~~~~~~~~~~~~~
A user/memory fence is an <address, value> pair. To signal the user fence,
the specified value will be written at the specified virtual address and the
waiting process will be woken up. A user fence can be signaled either by the
GPU or by a kernel async worker (like upon bind completion). The user can
wait on a user fence with a new user fence wait ioctl.

Here is some prior work on this:
https://patchwork.freedesktop.org/patch/349417/
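A small illustration of these semantics; the wait ioctl itself is not
sketched here, and the polling loop below only shows what "signaling" an
<address, value> pair means::

    #include <stdatomic.h>
    #include <stdint.h>

    /* The signaler (GPU or kernel async worker) stores 'value' at 'addr'
     * and wakes any waiters. A waiter using the proposed wait ioctl would
     * block in the kernel instead of spinning like this fallback does. */
    static void user_fence_spin_wait(const _Atomic uint64_t *addr,
                                     uint64_t value)
    {
        while (atomic_load_explicit(addr, memory_order_acquire) < value)
            ; /* wait until the fence value has been written */
    }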
Low Latency Submission
~~~~~~~~~~~~~~~~~~~~~~~
This allows the compute UMD to directly submit GPU jobs instead of going
through the execbuf ioctl. It is made possible by VM_BIND not being
synchronized against execbuf. VM_BIND allows the bind/unbind of the mappings
required for the directly submitted jobs.

Debugger
---------
With the debug event interface, a user space process (the debugger) is able
to keep track of and act upon resources created by another process (the
debugged) and attached to the GPU via the vm_bind interface.

GPU page faults
----------------
GPU page faults, when supported (in the future), will only be supported in
VM_BIND mode. While both the older execbuf mode and the newer VM_BIND mode
of binding will require using dma-fence to ensure residency, the GPU page
faults mode, when supported, will not use any dma-fence, as residency is
purely managed by installing and removing/invalidating page table entries.

Page level hints settings
--------------------------
VM_BIND allows any hints setting per mapping instead of per BO. Possible
hints include placement and atomicity. A sub-BO level placement hint will be
even more relevant with upcoming GPU on-demand page fault support.

Page level Cache/CLOS settings
-------------------------------
VM_BIND allows cache/CLOS settings per mapping instead of per BO.
Evictable page table allocations
---------------------------------
Make pagetable allocations evictable and manage them similar to VM_BIND
mapped objects. Page table pages are similar to persistent mappings of a VM
(the difference here is that the page table pages will not have an i915_vma
structure and, after swapping pages back in, the parent page link needs to
be updated).

Shared Virtual Memory (SVM) support
------------------------------------
The VM_BIND interface can be used to map system memory directly (without the
gem BO abstraction) using the HMM interface. SVM is only supported with GPU
page faults enabled.

VM_BIND UAPI
=============

.. _I915_PARAM_VM_BIND_VERSION:

**I915_PARAM_VM_BIND_VERSION**

VM_BIND feature version supported. See typedef drm_i915_getparam_t param.

Specifies the VM_BIND feature version supported.
The following versions of VM_BIND have been defined:

0: No VM_BIND support.

1: In VM_UNBIND calls, the UMD must specify the exact mappings created
   previously with VM_BIND; the ioctl will not support unbinding multiple
   mappings or splitting them. Similarly, VM_BIND calls will not replace
   any existing mappings.

2: The restrictions on unbinding partial or multiple mappings are lifted.
   Similarly, binding will replace any mappings in the given range.

See struct drm_i915_gem_vm_bind and struct drm_i915_gem_vm_unbind.
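A sketch of feature detection: DRM_IOCTL_I915_GETPARAM and
drm_i915_getparam_t are existing uapi, while I915_PARAM_VM_BIND_VERSION is
the proposed parameter::

    #include <sys/ioctl.h>
    #include <drm/i915_drm.h>

    /* Returns the supported VM_BIND version, or 0 if unsupported. */
    static int vm_bind_version(int fd)
    {
        int value = 0;
        drm_i915_getparam_t gp = {
            .param = I915_PARAM_VM_BIND_VERSION,
            .value = &value,
        };

        if (ioctl(fd, DRM_IOCTL_I915_GETPARAM, &gp))
            return 0; /* treat errors as no VM_BIND support */
        return value;
    }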
.. _I915_VM_CREATE_FLAGS_USE_VM_BIND:

**I915_VM_CREATE_FLAGS_USE_VM_BIND**

Flag to opt in to the VM_BIND mode of binding during VM creation.
See struct drm_i915_gem_vm_control flags.

The older execbuf2 ioctl will not support the VM_BIND mode of operation.
For VM_BIND mode, we have the new execbuf3 ioctl, which will not accept any
execlist (See struct drm_i915_gem_execbuffer3 for more details).
**struct drm_i915_gem_timeline_fence**

An input or output timeline fence.

**Definition**::

    struct drm_i915_gem_timeline_fence {
        __u32 handle;
        __u32 flags;
    #define I915_TIMELINE_FENCE_WAIT                (1 << 0)
    #define I915_TIMELINE_FENCE_SIGNAL              (1 << 1)
    #define __I915_TIMELINE_FENCE_UNKNOWN_FLAGS     (-(I915_TIMELINE_FENCE_SIGNAL << 1))
        __u64 value;
    };

**Members**

``handle``
    User's handle for a drm_syncobj to wait on or signal.

``flags``
    Supported flags are:

    I915_TIMELINE_FENCE_WAIT: Wait for the input fence before the operation.

    I915_TIMELINE_FENCE_SIGNAL: Return the operation completion fence as an
    output.

``value``
    A point in the timeline. The value must be 0 for a binary drm_syncobj.
    A value of 0 for a timeline drm_syncobj is invalid, as it turns a
    drm_syncobj into a binary one.

**Description**

The operation will wait for the input fence to signal.

The returned output fence will be signaled after the completion of the
operation.
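A sketch of requesting an out fence with this struct; drmSyncobjCreate() is
the existing libdrm helper, while I915_TIMELINE_FENCE_SIGNAL is the proposed
flag::

    #include <stdint.h>
    #include <xf86drm.h>
    #include <drm/i915_drm.h>

    /* Prepare a binary-syncobj out fence: the kernel signals it when the
     * operation completes. 'value' must be 0 for a binary drm_syncobj. */
    static int setup_out_fence(int fd, struct drm_i915_gem_timeline_fence *f)
    {
        uint32_t handle;

        if (drmSyncobjCreate(fd, 0, &handle))
            return -1;
        f->handle = handle;
        f->flags = I915_TIMELINE_FENCE_SIGNAL;
        f->value = 0;
        return 0;
    }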
**struct drm_i915_gem_vm_bind**

VA to object mapping to bind.

**Definition**::

    struct drm_i915_gem_vm_bind {
        __u32 vm_id;
        __u32 handle;
        __u64 start;
        __u64 offset;
        __u64 length;
        __u64 flags;
    #define I915_GEM_VM_BIND_CAPTURE        (1 << 0)
        struct drm_i915_gem_timeline_fence fence;
        __u64 extensions;
    };

**Members**

``vm_id``
    VM (address space) id to bind.

``handle``
    Object handle.

``start``
    Virtual Address start to bind.

``offset``
    Offset in the object to bind.

``length``
    Length of the mapping to bind.

``flags``
    Supported flags are:

    I915_GEM_VM_BIND_CAPTURE: Capture this mapping in the dump upon GPU
    error.

``fence``
    Timeline fence for bind completion signaling.

    The timeline fence is of the format struct drm_i915_gem_timeline_fence.

    It is an out fence, hence using the I915_TIMELINE_FENCE_WAIT flag is
    invalid, and an error will be returned.

    If the I915_TIMELINE_FENCE_SIGNAL flag is not set, then the out fence is
    not requested and binding is completed synchronously.

``extensions``
    Zero-terminated chain of extensions. For future extensions. See struct
    i915_user_extension.

**Description**

This structure is passed to the VM_BIND ioctl and specifies the mapping of a
GPU virtual address (VA) range to the section of an object that should be
bound in the device page table of the specified address space (VM). The VA
range specified must be unique (i.e., not currently bound) and can be mapped
to the whole object or to a section of the object (partial binding).
Multiple VA mappings can be created for the same object (aliasing).
**struct drm_i915_gem_vm_unbind**

VA to object mapping to unbind.

**Definition**::

    struct drm_i915_gem_vm_unbind {
        __u32 vm_id;
        __u32 rsvd;
        __u64 start;
        __u64 length;
        __u64 flags;
        struct drm_i915_gem_timeline_fence fence;
        __u64 extensions;
    };

**Members**

``vm_id``
    VM (address space) id to unbind.

``rsvd``
    Reserved, MBZ.

``start``
    Virtual Address start to unbind.

``length``
    Length of the mapping to unbind.

``flags``
    Currently reserved, MBZ.

    Note that **fence** carries its own flags.

``fence``
    Timeline fence for unbind completion signaling.

    The timeline fence is of the format struct drm_i915_gem_timeline_fence.

    It is an out fence, hence using the I915_TIMELINE_FENCE_WAIT flag is
    invalid, and an error will be returned.

    If the I915_TIMELINE_FENCE_SIGNAL flag is not set, then the out fence is
    not requested and unbinding is completed synchronously.

``extensions``
    Zero-terminated chain of extensions. For future extensions. See struct
    i915_user_extension.

**Description**

This structure is passed to the VM_UNBIND ioctl and specifies the GPU
virtual address (VA) range that should be unbound from the device page table
of the specified address space (VM). VM_UNBIND will force unbind the
specified range from the device page table without waiting for any GPU job
to complete. It is the UMD's responsibility to ensure the mapping is no
longer in use before calling VM_UNBIND.

If the specified mapping is not found, the ioctl will simply return without
any error.

VM_BIND/UNBIND ioctl calls executed on different CPU threads concurrently
are not ordered. Furthermore, parts of the VM_UNBIND operation can be done
asynchronously, if a valid **fence** is specified.
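A matching userspace sketch; DRM_IOCTL_I915_GEM_VM_UNBIND is the proposed
ioctl wrapper from this RFC::

    #include <sys/ioctl.h>
    #include <drm/i915_drm.h>

    /* Synchronously unbind a previously bound VA range (no out fence is
     * requested). The UMD must ensure the GPU no longer uses the mapping. */
    static int unbind_range(int fd, __u32 vm_id, __u64 gpu_va, __u64 size)
    {
        struct drm_i915_gem_vm_unbind unbind = {
            .vm_id = vm_id,
            .start = gpu_va,
            .length = size,
        };

        return ioctl(fd, DRM_IOCTL_I915_GEM_VM_UNBIND, &unbind);
    }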
**struct drm_i915_gem_execbuffer3**

Structure for the DRM_I915_GEM_EXECBUFFER3 ioctl.

**Definition**::

    struct drm_i915_gem_execbuffer3 {
        __u32 ctx_id;
        __u32 engine_idx;
        __u64 batch_address;
        __u64 flags;
        __u32 rsvd1;
        __u32 fence_count;
        __u64 timeline_fences;
        __u64 rsvd2;
        __u64 extensions;
    };

**Members**

``ctx_id``
    Context id.

    Only contexts with a user engine map are allowed.

``engine_idx``
    Engine index.

    An index in the user engine map of the context specified by **ctx_id**.

``batch_address``
    Batch gpu virtual address/es.

    For a normal submission, it is the gpu virtual address of the batch
    buffer. For a parallel submission, it is a pointer to an array of batch
    buffer gpu virtual addresses, with the array size equal to the number of
    (parallel) engines involved in that submission (See struct
    i915_context_engines_parallel_submit).

``flags``
    Currently reserved, MBZ.

``rsvd1``
    Reserved, MBZ.

``fence_count``
    Number of fences in the **timeline_fences** array.

``timeline_fences``
    Pointer to an array of timeline fences.

    Timeline fences are of the format struct drm_i915_gem_timeline_fence.

``rsvd2``
    Reserved, MBZ.

``extensions``
    Zero-terminated chain of extensions. For future extensions. See struct
    i915_user_extension.

**Description**

The DRM_I915_GEM_EXECBUFFER3 ioctl only works in VM_BIND mode, and the
VM_BIND mode only works with this ioctl for submission. See
I915_VM_CREATE_FLAGS_USE_VM_BIND.
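A submission sketch; DRM_IOCTL_I915_GEM_EXECBUFFER3 is the proposed ioctl
wrapper, and out_fence is assumed to have been prepared with the
I915_TIMELINE_FENCE_SIGNAL flag set (e.g., as in the earlier timeline fence
sketch)::

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <drm/i915_drm.h>

    /* Submit one batch by GPU VA on a VM_BIND VM, requesting an out fence
     * that can later serve as the end-of-batch check. */
    static int submit_batch(int fd, __u32 ctx_id, __u32 engine_idx,
                            __u64 batch_va,
                            struct drm_i915_gem_timeline_fence *out_fence)
    {
        struct drm_i915_gem_execbuffer3 eb = {
            .ctx_id = ctx_id,
            .engine_idx = engine_idx,
            .batch_address = batch_va,
            .fence_count = 1,
            .timeline_fences = (__u64)(uintptr_t)out_fence,
        };

        return ioctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER3, &eb);
    }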
**struct drm_i915_gem_create_ext_vm_private**

Extension to make the object private to the specified VM.

**Definition**::

    struct drm_i915_gem_create_ext_vm_private {
    #define I915_GEM_CREATE_EXT_VM_PRIVATE      2
        struct i915_user_extension base;
        __u32 vm_id;
    };

**Members**

``base``
    Extension link. See struct i915_user_extension.

``vm_id``
    Id of the VM to which the object is private.

**Description**

See struct drm_i915_gem_create_ext.
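A creation sketch; struct drm_i915_gem_create_ext and
DRM_IOCTL_I915_GEM_CREATE_EXT are existing uapi, while the VM_PRIVATE
extension is the proposed addition::

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <drm/i915_drm.h>

    /* Create a BO that is private to the given VM by chaining the proposed
     * VM_PRIVATE extension into the gem_create_ext extension list. */
    static int create_vm_private_bo(int fd, __u64 size, __u32 vm_id,
                                    __u32 *handle)
    {
        struct drm_i915_gem_create_ext_vm_private priv = {
            .base.name = I915_GEM_CREATE_EXT_VM_PRIVATE,
            .vm_id = vm_id,
        };
        struct drm_i915_gem_create_ext create = {
            .size = size,
            .extensions = (__u64)(uintptr_t)&priv,
        };

        if (ioctl(fd, DRM_IOCTL_I915_GEM_CREATE_EXT, &create))
            return -1;
        *handle = create.handle;
        return 0;
    }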