Microarchitectural Data Sampling (MDS) mitigation
==================================================

.. _mds:

Overview
--------

Microarchitectural Data Sampling (MDS) is a family of side channel attacks
on internal buffers in Intel CPUs. The variants are:

 - Microarchitectural Store Buffer Data Sampling (MSBDS) (CVE-2018-12126)
 - Microarchitectural Fill Buffer Data Sampling (MFBDS) (CVE-2018-12130)
 - Microarchitectural Load Port Data Sampling (MLPDS) (CVE-2018-12127)
 - Microarchitectural Data Sampling Uncacheable Memory (MDSUM)
   (CVE-2019-11091)

MSBDS leaks Store Buffer Entries which can be speculatively forwarded to a
dependent load (store-to-load forwarding) as an optimization. The forward
can also happen to a faulting or assisting load operation for a different
memory address, which can be exploited under certain conditions. Store
buffers are partitioned between Hyper-Threads so cross thread forwarding is
not possible. But if a thread enters or exits a sleep state the store
buffer is repartitioned which can expose data from one thread to the other.

MFBDS leaks Fill Buffer Entries. Fill buffers are used internally to manage
L1 miss situations and to hold data which is returned or sent in response
to a memory or I/O operation. Fill buffers can forward data to a load
operation and also write data to the cache. When the fill buffer is
deallocated it can retain the stale data of the preceding operations which
can then be forwarded to a faulting or assisting load operation, which can
be exploited under certain conditions. Fill buffers are shared between
Hyper-Threads so cross thread leakage is possible.

MLPDS leaks Load Port Data. Load ports are used to perform load operations
from memory or I/O. The received data is then forwarded to the register
file or a subsequent operation. In some implementations the Load Port can
contain stale data from a previous operation which can be forwarded to
faulting or assisting loads under certain conditions, which again can be
exploited eventually. Load ports are shared between Hyper-Threads so cross
thread leakage is possible.

MDSUM is a special case of MSBDS, MFBDS and MLPDS. An uncacheable load from
memory that takes a fault or assist can leave data in a microarchitectural
structure that may later be observed using one of the same methods used by
MSBDS, MFBDS or MLPDS.

Exposure assumptions
--------------------

It is assumed that attack code resides in user space or in a guest with one
exception. The rationale behind this assumption is that the code construct
needed for exploiting MDS requires:

 - to control the load to trigger a fault or assist

 - to have a disclosure gadget which exposes the speculatively accessed
   data for consumption through a side channel

 - to control the pointer through which the disclosure gadget exposes the
   data
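
A purely illustrative user space sketch of how these three ingredients fit
together is shown below. It is not a working exploit: fault suppression,
the timing measurement and the actual leak primitive are elided, and all
names (probe_array, faulting_load, disclosure_gadget) are hypothetical::

    #include <stdint.h>

    #define PAGE_SZ 4096
    static uint8_t probe_array[256 * PAGE_SZ];	/* side channel medium */

    /*
     * (1) A load which faults or needs a microcode assist. During the
     * speculation window the CPU may forward stale buffer contents as
     * the "result" of this load before the fault is delivered.
     */
    static inline uint8_t faulting_load(const uint8_t *p)
    {
            return *p;	/* fault handling/suppression elided */
    }

    /*
     * (2) + (3) The disclosure gadget: attacker-controlled pointer
     * arithmetic encodes the speculatively obtained byte into the
     * cache. Which probe_array page ends up cached depends on the
     * leaked value and can later be recovered with a Flush+Reload
     * style timing measurement.
     */
    static void disclosure_gadget(uint8_t leaked)
    {
            (void)probe_array[(unsigned int)leaked * PAGE_SZ];
    }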

The existence of such a construct in the kernel cannot be excluded with
100% certainty, but the complexity involved makes it extremely unlikely.

There is one exception, which is untrusted BPF. The functionality of
untrusted BPF is limited, but it needs to be thoroughly investigated
whether it can be used to create such a construct.

Mitigation strategy
-------------------

All variants have the same mitigation strategy, at least for the single CPU
thread case (SMT off): force the CPU to clear the affected buffers.

This is achieved by using the otherwise unused and obsolete VERW
instruction in combination with a microcode update. The microcode clears
the affected CPU buffers when the VERW instruction is executed.

For virtualization there are two ways to achieve CPU buffer clearing:
either the modified VERW instruction or the L1D Flush command. The latter
is issued when the L1TF mitigation is enabled, so the extra VERW can be
avoided. If the CPU is not affected by L1TF then VERW needs to be issued.

If the VERW instruction with the supplied segment selector argument is
executed on a CPU without the microcode update there is no side effect
other than a small number of pointlessly wasted CPU cycles.

This does not protect against cross Hyper-Thread attacks except for MSBDS,
which is only exploitable cross Hyper-Thread when one of the Hyper-Threads
enters a C-state.

The kernel provides a function to invoke the buffer clearing:

  mds_clear_cpu_buffers()

Also the macro CLEAR_CPU_BUFFERS can be used in ASM late in the
exit-to-user path. Other than EFLAGS.ZF, this macro doesn't clobber any
registers.
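
For reference, the C helper is essentially a thin wrapper around a VERW
with a memory operand. A simplified sketch, modeled on the kernel's
implementation in arch/x86/include/asm/nospec-branch.h and not a verbatim
copy (kernel context: u16 from <linux/types.h>, __KERNEL_DS from
<asm/segment.h>)::

    static __always_inline void mds_clear_cpu_buffers(void)
    {
            static const u16 ds = __KERNEL_DS;

            /*
             * The memory-operand form of VERW is required; only that
             * form is documented to trigger the buffer clearing in the
             * updated microcode. The only architectural side effect is
             * EFLAGS.ZF, hence the "cc" clobber.
             */
            asm volatile("verw %[ds]" : : [ds] "m" (ds) : "cc");
    }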

The mitigation is invoked on kernel/userspace, hypervisor/guest and C-state
(idle) transitions.

As a special quirk to address virtualization scenarios where the host has
the microcode updated, but the hypervisor does not (yet) expose the
MD_CLEAR CPUID bit to guests, the kernel issues the VERW instruction in the
hope that it might actually clear the buffers. The state is reflected
accordingly.

According to current knowledge additional mitigations inside the kernel
itself are not required because the necessary gadgets to expose the leaked
data cannot be controlled in a way which allows exploitation from malicious
user space or VM guests.

Kernel internal mitigation modes
--------------------------------

 ======= ============================================================
 off     Mitigation is disabled. Either the CPU is not affected or
         mds=off is supplied on the kernel command line.

 full    Mitigation is enabled. CPU is affected and MD_CLEAR is
         advertised in CPUID.

 vmwerv  Mitigation is enabled. CPU is affected and MD_CLEAR is not
         advertised in CPUID. That is mainly for virtualization
         scenarios where the host has the updated microcode but the
         hypervisor does not expose MD_CLEAR in CPUID. It's a best
         effort approach without guarantee.
 ======= ============================================================

If the CPU is affected and mds=off is not supplied on the kernel command
line then the kernel selects the appropriate mitigation mode depending on
the availability of the MD_CLEAR CPUID bit.
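
A condensed sketch of that selection logic is shown below. It is modeled on
mds_select_mitigation() in arch/x86/kernel/cpu/bugs.c but simplified (the
mds=off command line parsing is elided), so treat the exact flow as an
approximation of the real code::

    static enum mds_mitigations mds_mitigation = MDS_MITIGATION_FULL;

    static void __init mds_select_mitigation(void)
    {
            /* Not affected, or mitigations disabled on the command line */
            if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off()) {
                    mds_mitigation = MDS_MITIGATION_OFF;    /* "off" */
                    return;
            }

            if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
                    mds_mitigation = MDS_MITIGATION_FULL;   /* "full" */
            else
                    mds_mitigation = MDS_MITIGATION_VMWERV; /* best effort */

            /* Enable the VERW based clearing in the exit/idle paths */
            setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
    }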

Mitigation points
-----------------

1. Return to user space
^^^^^^^^^^^^^^^^^^^^^^^

   When transitioning from kernel to user space the CPU buffers are flushed
   on affected CPUs when the mitigation is not disabled on the kernel
   command line. The mitigation is enabled through the feature flag
   X86_FEATURE_CLEAR_CPU_BUF.

   The mitigation is invoked just before transitioning to userspace after
   user registers are restored. This is done to minimize the window in
   which kernel data could be accessed after VERW, e.g. via an NMI after
   VERW.

   **Corner case not handled**
   Interrupts returning to kernel don't clear CPU buffers since the
   exit-to-user path is expected to do that anyway. But there could be a
   case when an NMI is generated in the kernel after the exit-to-user path
   has cleared the buffers. This case is not handled and an NMI returning
   to kernel doesn't clear CPU buffers because:

   1. It is rare to get an NMI after VERW, but before returning to
      userspace.
   2. For an unprivileged user, there is no known way to make that NMI
      less rare or target it.
   3. It would take a large number of these precisely-timed NMIs to mount
      an actual attack. There's presumably not enough bandwidth.
   4. The NMI in question occurs after a VERW, i.e. when user state is
      restored and most interesting data is already scrubbed. What's left
      is only the data that NMI touches, and that may or may not be of
      any interest.
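
   In the assembly exit paths this clearing is what the CLEAR_CPU_BUFFERS
   macro mentioned earlier provides. A simplified 64-bit sketch, modeled on
   the macro in arch/x86/include/asm/nospec-branch.h (the real one is built
   with stringify helpers and also handles 32-bit)::

       .macro CLEAR_CPU_BUFFERS
               /*
                * Patched in at boot only when X86_FEATURE_CLEAR_CPU_BUF
                * is set; unaffected or mitigation-off systems execute
                * nothing here. mds_verw_sel is a kernel data segment
                * selector kept around solely as the VERW operand.
                */
               ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
       .endm

   The macro is placed as close as possible to the final return to user
   mode, after user registers have been restored, matching the
   window-minimizing placement described above.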

2. C-State transition
^^^^^^^^^^^^^^^^^^^^^

   When a CPU goes idle and enters a C-State the CPU buffers need to be
   cleared on affected CPUs when SMT is active. This addresses the
   repartitioning of the store buffer when one of the Hyper-Threads enters
   a C-State.

   When SMT is inactive, i.e. either the CPU does not support it or all
   sibling threads are offline, CPU buffer clearing is not required.

   The idle clearing is enabled on CPUs which are only affected by MSBDS
   and not by any other MDS variant. The other MDS variants cannot be
   protected against cross Hyper-Thread attacks because the Fill Buffer and
   the Load Ports are shared. So on CPUs affected by other variants, the
   idle clearing would be a window dressing exercise and is therefore not
   activated.

   The invocation is controlled by the static key mds_idle_clear which is
   switched depending on the chosen mitigation mode and the SMT state of
   the system.
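
   The hook itself is tiny; a sketch modeled on the kernel's
   mds_idle_clear_cpu_buffers() helper (simplified, not verbatim)::

       DECLARE_STATIC_KEY_FALSE(mds_idle_clear);

       /* Called from the halt()/mwait() wrappers before going idle */
       static __always_inline void mds_idle_clear_cpu_buffers(void)
       {
               /*
                * The static key compiles down to a patched NOP/JMP, so
                * systems which do not need the idle clearing pay
                * nothing on this hot path.
                */
               if (static_branch_likely(&mds_idle_clear))
                       mds_clear_cpu_buffers();
       }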

   The buffer clear is only invoked before entering the C-State to prevent
   stale data from the idling CPU from spilling to the Hyper-Thread sibling
   after the store buffer got repartitioned and all entries are available
   to the non-idle sibling.

   When coming out of idle the store buffer is partitioned again so each
   sibling has half of it available. The CPU coming back from idle could
   then be speculatively exposed to contents of the sibling. The buffers
   are flushed either on exit to user space or on VMENTER so malicious code
   in user space or the guest cannot speculatively access them.

   The mitigation is hooked into all variants of halt()/mwait(), but does
   not cover the legacy ACPI IO-Port mechanism because the ACPI idle driver
   has been superseded by the intel_idle driver around 2010 and is
   preferred on all affected CPUs which are expected to gain the MD_CLEAR
   functionality in microcode. Aside from that, the IO-Port mechanism is a
   legacy interface which is only used on older systems which are either
   not affected or do not receive microcode updates anymore.