.. SPDX-License-Identifier: GPL-2.0

=====================================
Intel Trust Domain Extensions (TDX)
=====================================

Intel's Trust Domain Extensions (TDX) protect confidential guest VMs from
the host and physical attacks by isolating the guest register state and by
encrypting the guest memory. In TDX, a special module running in a special
mode sits between the host and the guest and manages the guest/host
separation.

TDX Host Kernel Support
=======================

TDX introduces a new CPU mode called Secure Arbitration Mode (SEAM) and a
new isolated range pointed to by the SEAM Range Register (SEAMRR). A
CPU-attested software module called 'the TDX module' runs inside the new
isolated range to provide the functionalities to manage and run protected
VMs.

TDX also leverages Intel Multi-Key Total Memory Encryption (MKTME) to
provide crypto-protection to the VMs. TDX reserves part of the MKTME
KeyIDs as TDX private KeyIDs, which are only accessible within the SEAM
mode. The BIOS is responsible for partitioning legacy MKTME KeyIDs and
TDX KeyIDs.

Before the TDX module can be used to create and run protected VMs, it must
be loaded into the isolated range and properly initialized. The TDX
architecture doesn't require the BIOS to load the TDX module, but the
kernel assumes it is loaded by the BIOS.

TDX boot-time detection
-----------------------

The kernel detects TDX by detecting TDX private KeyIDs during kernel boot.
The dmesg below shows when TDX is enabled by the BIOS::

  [..] virt/tdx: BIOS enabled: private KeyID range: [16, 64)

TDX module initialization
-------------------------

The kernel talks to the TDX module via the new SEAMCALL instruction. The
TDX module implements SEAMCALL leaf functions to allow the kernel to
initialize it.

If the TDX module isn't loaded, the SEAMCALL instruction fails with a
special error. In this case the kernel fails the module initialization
and reports that the module isn't loaded::

  [..] virt/tdx: module not loaded

Initializing the TDX module consumes roughly 1/256th of system RAM to use
as 'metadata' for the TDX memory. It also takes additional CPU time to
initialize that metadata along with the TDX module itself. Neither is
trivial, so the kernel initializes the TDX module at runtime on demand.

Besides initializing the TDX module, a per-cpu initialization SEAMCALL
must be done on one cpu before any other SEAMCALLs can be made on that
cpu.

Users can consult dmesg to see whether the TDX module has been
initialized.

If the TDX module is initialized successfully, dmesg shows something like
below::

  [..] virt/tdx: 262668 KBs allocated for PAMT
  [..] virt/tdx: TDX-Module initialized

If the TDX module failed to initialize, dmesg also shows it failed to
initialize::

  [..] virt/tdx: TDX-Module initialization failed ...

TDX Interaction with Other Kernel Components
--------------------------------------------

TDX Memory Policy
~~~~~~~~~~~~~~~~~

TDX reports a list of "Convertible Memory Regions" (CMRs) to tell the
kernel which memory is TDX compatible. The kernel needs to build a list
of memory regions (out of CMRs) as "TDX-usable" memory and pass those
regions to the TDX module. Once this is done, those "TDX-usable" memory
regions are fixed during the module's lifetime.

To keep things simple, currently the kernel simply guarantees that all
pages in the page allocator are TDX memory. Specifically, the kernel uses
all system memory in the core-mm "at the time of TDX module
initialization" as TDX memory, and in the meantime, refuses to online any
non-TDX memory in the memory hotplug.

Physical Memory Hotplug
~~~~~~~~~~~~~~~~~~~~~~~

Note that TDX assumes convertible memory is always physically present
during the machine's runtime. A non-buggy BIOS should never support
hot-removal of any convertible memory. This implementation doesn't handle
ACPI memory removal but depends on the BIOS to behave correctly.

CPU Hotplug
~~~~~~~~~~~

The TDX module requires that the per-cpu initialization SEAMCALL be done
on one cpu before any other SEAMCALLs can be made on that cpu. The
kernel, via the CPU hotplug framework, performs the necessary
initialization when a CPU is first brought online.

TDX doesn't support physical (ACPI) CPU hotplug. During machine boot,
TDX verifies that all boot-time present logical CPUs are TDX compatible
before enabling TDX. A non-buggy BIOS should never support hot-add or
hot-removal of physical CPUs. Currently the kernel doesn't handle
physical CPU hotplug, but depends on the BIOS to behave correctly.

Note that TDX works with CPU logical online/offline, thus the kernel
still allows offlining a logical CPU and onlining it again.

Erratum
~~~~~~~

The first few generations of TDX hardware have an erratum. A partial
write to a TDX private memory cacheline will silently "poison" the line.
Subsequent reads will consume the poison and generate a machine check.

A partial write is a memory write where a write transaction of less than
a cacheline lands at the memory controller. The CPU does these via
non-temporal write instructions (like MOVNTI), or through UC/WC memory
mappings. Devices can also do partial writes via DMA.

Theoretically, a kernel bug could do a partial write to TDX private
memory and trigger an unexpected machine check. What's more, the machine
check code will present these as "Hardware error" when they were, in
fact, a software-triggered issue. But in the end, this issue is hard to
trigger.

If the platform has such an erratum, the kernel prints an additional
message in the machine check handler to tell the user that the machine
check may be caused by a kernel bug on TDX private memory.

Kexec
~~~~~

Currently kexec doesn't work on TDX platforms with the aforementioned
erratum. It fails when loading the kexec kernel image. Otherwise it
works normally.

Interaction vs S3 and deeper states
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TDX cannot survive S3 and deeper states. The hardware resets and
disables TDX completely when the platform goes to S3 and deeper. Both
TDX guests and the TDX module get destroyed permanently.

The kernel uses S3 for suspend-to-ram, and uses S4 and deeper states for
hibernation. Currently, for simplicity, the kernel chooses to make TDX
mutually exclusive with S3 and hibernation.

The kernel disables TDX during early boot when hibernation support is
available::

  [..] virt/tdx: initialization failed: Hibernation support is enabled

Add 'nohibernate' to the kernel command line to disable hibernation in
order to use TDX.

ACPI S3 is disabled during kernel early boot if TDX is enabled. The user
needs to turn off TDX in the BIOS in order to use S3.

TDX Guest Support
=================

Since the host cannot directly access guest registers or memory, much
normal functionality of a hypervisor must be moved into the guest. This
is implemented using a Virtualization Exception (#VE) that is handled by
the guest kernel. Most #VEs are handled entirely inside the guest
kernel, but some require the hypervisor to be consulted.

TDX includes new hypercall-like mechanisms for communicating from the
guest to the hypervisor or the TDX module.

New TDX Exceptions
------------------

TDX guests behave differently from bare-metal and traditional VMX
guests. In TDX guests, otherwise normal instructions or memory accesses
can cause #VE or #GP exceptions.

Instructions marked with an '*' conditionally cause exceptions. The
details for these instructions are discussed below.

Instruction-based #VE
~~~~~~~~~~~~~~~~~~~~~

- Port I/O (INS, OUTS, IN, OUT)
- HLT
- MONITOR, MWAIT
- WBINVD, INVD
- VMCALL
- RDMSR*, WRMSR*
- CPUID*

Instruction-based #GP
~~~~~~~~~~~~~~~~~~~~~

- All VMX instructions: INVEPT, INVVPID, VMCLEAR, VMFUNC, VMLAUNCH,
  VMPTRLD, VMPTRST, VMREAD, VMRESUME, VMWRITE, VMXOFF, VMXON
- ENCLS, ENCLU
- GETSEC
- RSM
- ENQCMD
- RDMSR*, WRMSR*

RDMSR/WRMSR Behavior
~~~~~~~~~~~~~~~~~~~~

MSR access behavior falls into three categories:

- #GP generated
- #VE generated
- "Just works"

In general, the #GP MSRs should not be used in guests. Their use likely
indicates a bug in the guest. The guest may try to handle the #GP with a
hypercall but it is unlikely to succeed.

The #VE MSRs are typically able to be handled by the hypervisor. Guests
can make a hypercall to the hypervisor to handle the #VE.

The "just works" MSRs do not need any special guest handling. They might
be implemented by directly passing through the MSR to the hardware or by
trapping and handling in the TDX module. Other than possibly being slow,
these MSRs appear to function just as they would on bare metal.

CPUID Behavior
~~~~~~~~~~~~~~

For some CPUID leaves and sub-leaves, the virtualized bit fields of CPUID
return values (in guest EAX/EBX/ECX/EDX) are configurable by the
hypervisor. For such cases, the Intel TDX module architecture defines two
virtualization types:

- Bit fields for which the hypervisor controls the value seen by the
  guest TD.

- Bit fields for which the hypervisor configures the value such that the
  guest TD either sees their native value or a value of 0. For these bit
  fields, the hypervisor can mask off the native values, but it can not
  turn *on* values.

A #VE is generated for CPUID leaves and sub-leaves that the TDX module
does not know how to handle. The guest kernel may ask the hypervisor for
the value with a hypercall.

#VE on Memory Accesses
----------------------

There are essentially two classes of TDX memory: private and shared.
Private memory receives full TDX protections. Its content is protected
against access from the hypervisor. Shared memory is expected to be
shared between guest and hypervisor and does not receive full TDX
protections.

A TD guest is in control of whether its memory accesses are treated as
private or shared. It selects the behavior with a bit in its page table
entries. This helps ensure that a guest does not place sensitive
information in shared memory, exposing it to the untrusted hypervisor.

#VE on Shared Memory
~~~~~~~~~~~~~~~~~~~~

Access to shared mappings can cause a #VE. The hypervisor ultimately
controls whether a shared memory access causes a #VE, so the guest must
be careful to only reference shared pages for which it can safely handle
a #VE. For instance, the guest should be careful not to access shared
memory in the #VE handler before it reads the #VE info structure
(TDG.VP.VEINFO.GET).

Shared mapping content is entirely controlled by the hypervisor. The
guest should only use shared mappings for communicating with the
hypervisor. Shared mappings must never be used for sensitive memory
content like kernel stacks. A good rule of thumb is that
hypervisor-shared memory should be treated the same as memory mapped to
userspace. Both the hypervisor and userspace are completely untrusted.

MMIO for virtual devices is implemented as shared memory. The guest must
be careful not to access device MMIO regions unless it is also prepared
to handle a #VE.

#VE on Private Pages
~~~~~~~~~~~~~~~~~~~~

An access to private mappings can also cause a #VE. Since all kernel
memory is also private memory, the kernel might theoretically need to
handle a #VE on arbitrary kernel memory accesses. This is not feasible,
so TDX guests ensure that all guest memory has been "accepted" before
memory is used by the kernel.

A modest amount of memory (typically 512M) is pre-accepted by the
firmware before the kernel runs to ensure that the kernel can start up
without being subjected to a #VE.

The hypervisor is permitted to unilaterally move accepted pages to a
"blocked" state. However, if it does this, a page access will not
generate a #VE. It will, instead, cause a "TD Exit" where the hypervisor
is required to handle the exception.

Linux #VE handler
-----------------

Just like page faults or #GPs, #VE exceptions can be either handled or
be fatal. Typically, an unhandled userspace #VE results in a SIGSEGV.
An unhandled kernel #VE results in an oops.

Handling nested exceptions on x86 is typically nasty business. A #VE
could be interrupted by an NMI which triggers another #VE and hilarity
ensues. The TDX #VE architecture anticipated this scenario and includes
a feature to make it slightly less nasty.

During #VE handling, the TDX module ensures that all interrupts
(including NMIs) are blocked. The block remains in place until the guest
makes a TDG.VP.VEINFO.GET TDCALL. This allows the guest to control when
interrupts or a new #VE can be delivered.

However, the guest kernel must still be careful to avoid potential
#VE-triggering actions (discussed above) while this block is in place.
While the block is in place, any #VE is elevated to a double fault (#DF)
which is not recoverable.

MMIO handling
-------------

In non-TDX VMs, MMIO is usually implemented by giving a guest access to
a mapping which will cause a VMEXIT on access, and then the hypervisor
emulates the access. That is not possible in TDX guests because VMEXIT
will expose the register state to the host. TDX guests don't trust the
host and can't have their state exposed to the host.
TDX guests don’t trust the host and can’t have their state exposed to the host.”…””}”(hjÂh²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÝh³hÇh´MKhj±h²hubhÞ)”}”(hŒóIn TDX, MMIO regions typically trigger a #VE exception in the guest. The guest #VE handler then emulates the MMIO instruction inside the guest and converts it into a controlled TDCALL to the host, rather than exposing guest state to the host.”h]”hŒóIn TDX, MMIO regions typically trigger a #VE exception in the guest. The guest #VE handler then emulates the MMIO instruction inside the guest and converts it into a controlled TDCALL to the host, rather than exposing guest state to the host.”…””}”(hjÐh²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÝh³hÇh´MQhj±h²hubhÞ)”}”(hXMMIO addresses on x86 are just special physical addresses. They can theoretically be accessed with any instruction that accesses memory. However, the kernel instruction decoding method is limited. It is only designed to decode instructions like those generated by io.h macros.”h]”hXMMIO addresses on x86 are just special physical addresses. They can theoretically be accessed with any instruction that accesses memory. However, the kernel instruction decoding method is limited. It is only designed to decode instructions like those generated by io.h macros.”…””}”(hjÞh²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÝh³hÇh´MVhj±h²hubhÞ)”}”(hŒLMMIO access via other means (like structure overlays) may result in an oops.”h]”hŒLMMIO access via other means (like structure overlays) may result in an oops.”…””}”(hjìh²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÝh³hÇh´M[hj±h²hubeh}”(h]”Œ mmio-handling”ah ]”h"]”Œ mmio handling”ah$]”h&]”uh1hÈhj©h²hh³hÇh´MIubhÉ)”}”(hhh]”(hÎ)”}”(hŒShared Memory Conversions”h]”hŒShared Memory Conversions”…””}”(hjh²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÍhjh²hh³hÇh´M_ubhÞ)”}”(hXWAll TDX guest memory starts out as private at boot. This memory can not be accessed by the hypervisor. 
However, some kernel users like device drivers might have a need to share data with the hypervisor. To do this, memory must be converted between shared and private. This can be accomplished using some existing memory encryption helpers:”h]”hXWAll TDX guest memory starts out as private at boot. This memory can not be accessed by the hypervisor. However, some kernel users like device drivers might have a need to share data with the hypervisor. To do this, memory must be converted between shared and private. This can be accomplished using some existing memory encryption helpers:”…””}”(hjh²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÝh³hÇh´Mahjh²hubhŒ block_quote”“”)”}”(hŒx* set_memory_decrypted() converts a range of pages to shared. * set_memory_encrypted() converts memory back to private. ”h]”j)”}”(hhh]”(j)”}”(hŒ;set_memory_decrypted() converts a range of pages to shared.”h]”hÞ)”}”(hj,h]”hŒ;set_memory_decrypted() converts a range of pages to shared.”…””}”(hj.h²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÝh³hÇh´Mghj*ubah}”(h]”h ]”h"]”h$]”h&]”uh1jhj'ubj)”}”(hŒ8set_memory_encrypted() converts memory back to private. ”h]”hÞ)”}”(hŒ7set_memory_encrypted() converts memory back to private.”h]”hŒ7set_memory_encrypted() converts memory back to private.”…””}”(hjEh²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÝh³hÇh´MhhjAubah}”(h]”h ]”h"]”h$]”h&]”uh1jhj'ubeh}”(h]”h ]”h"]”h$]”h&]”jÃŒ*”uh1jh³hÇh´Mghj#ubah}”(h]”h ]”h"]”h$]”h&]”uh1j!h³hÇh´Mghjh²hubhÞ)”}”(hŒœDevice drivers are the primary user of shared memory, but there's no need to touch every driver. DMA buffers and ioremap() do the conversions automatically.”h]”hŒžDevice drivers are the primary user of shared memory, but there’s no need to touch every driver. DMA buffers and ioremap() do the conversions automatically.”…””}”(hjfh²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÝh³hÇh´Mjhjh²hubhÞ)”}”(hŒ]TDX uses SWIOTLB for most DMA allocations. The SWIOTLB buffer is converted to shared on boot.”h]”hŒ]TDX uses SWIOTLB for most DMA allocations. 
The SWIOTLB buffer is converted to shared on boot.”…””}”(hjth²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÝh³hÇh´Mnhjh²hubhÞ)”}”(hŒxFor coherent DMA allocation, the DMA buffer gets converted on the allocation. Check force_dma_unencrypted() for details.”h]”hŒxFor coherent DMA allocation, the DMA buffer gets converted on the allocation. Check force_dma_unencrypted() for details.”…””}”(hj‚h²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÝh³hÇh´Mqhjh²hubeh}”(h]”Œshared-memory-conversions”ah ]”h"]”Œshared memory conversions”ah$]”h&]”uh1hÈhj©h²hh³hÇh´M_ubeh}”(h]”Œtdx-guest-support”ah ]”h"]”Œtdx guest support”ah$]”h&]”uh1hÈhhÊh²hh³hÇh´K«ubhÉ)”}”(hhh]”(hÎ)”}”(hŒ Attestation”h]”hŒ Attestation”…””}”(hj£h²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÍhj h²hh³hÇh´MuubhÞ)”}”(hX2Attestation is used to verify the TDX guest trustworthiness to other entities before provisioning secrets to the guest. For example, a key server may want to use attestation to verify that the guest is the desired one before releasing the encryption keys to mount the encrypted rootfs or a secondary drive.”h]”hX2Attestation is used to verify the TDX guest trustworthiness to other entities before provisioning secrets to the guest. For example, a key server may want to use attestation to verify that the guest is the desired one before releasing the encryption keys to mount the encrypted rootfs or a secondary drive.”…””}”(hj±h²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÝh³hÇh´Mwhj h²hubhÞ)”}”(hXŽThe TDX module records the state of the TDX guest in various stages of the guest boot process using the build time measurement register (MRTD) and runtime measurement registers (RTMR). Measurements related to the guest initial configuration and firmware image are recorded in the MRTD register. Measurements related to initial state, kernel image, firmware image, command line options, initrd, ACPI tables, etc are recorded in RTMR registers. 
For more details, as an example, please refer to TDX Virtual Firmware design specification, section titled "TD Measurement". At TDX guest runtime, the attestation process is used to attest to these measurements.”h]”hX’The TDX module records the state of the TDX guest in various stages of the guest boot process using the build time measurement register (MRTD) and runtime measurement registers (RTMR). Measurements related to the guest initial configuration and firmware image are recorded in the MRTD register. Measurements related to initial state, kernel image, firmware image, command line options, initrd, ACPI tables, etc are recorded in RTMR registers. For more details, as an example, please refer to TDX Virtual Firmware design specification, section titled “TD Measurementâ€. At TDX guest runtime, the attestation process is used to attest to these measurements.”…””}”(hj¿h²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÝh³hÇh´M}hj h²hubhÞ)”}”(hŒXThe attestation process consists of two steps: TDREPORT generation and Quote generation.”h]”hŒXThe attestation process consists of two steps: TDREPORT generation and Quote generation.”…””}”(hjÍh²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÝh³hÇh´Mˆhj h²hubhÞ)”}”(hXuTDX guest uses TDCALL[TDG.MR.REPORT] to get the TDREPORT (TDREPORT_STRUCT) from the TDX module. TDREPORT is a fixed-size data structure generated by the TDX module which contains guest-specific information (such as build and boot measurements), platform security version, and the MAC to protect the integrity of the TDREPORT. A user-provided 64-Byte REPORTDATA is used as input and included in the TDREPORT. Typically it can be some nonce provided by attestation service so the TDREPORT can be verified uniquely. More details about the TDREPORT can be found in Intel TDX Module specification, section titled "TDG.MR.REPORT Leaf".”h]”hXyTDX guest uses TDCALL[TDG.MR.REPORT] to get the TDREPORT (TDREPORT_STRUCT) from the TDX module. 
TDREPORT is a fixed-size data structure generated by the TDX module which contains guest-specific information (such as build and boot measurements), platform security version, and the MAC to protect the integrity of the TDREPORT. A user-provided 64-Byte REPORTDATA is used as input and included in the TDREPORT. Typically it can be some nonce provided by attestation service so the TDREPORT can be verified uniquely. More details about the TDREPORT can be found in Intel TDX Module specification, section titled “TDG.MR.REPORT Leafâ€.”…””}”(hjÛh²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÝh³hÇh´M‹hj h²hubhÞ)”}”(hXcAfter getting the TDREPORT, the second step of the attestation process is to send it to the Quoting Enclave (QE) to generate the Quote. TDREPORT by design can only be verified on the local platform as the MAC key is bound to the platform. To support remote verification of the TDREPORT, TDX leverages Intel SGX Quoting Enclave to verify the TDREPORT locally and convert it to a remotely verifiable Quote. Method of sending TDREPORT to QE is implementation specific. Attestation software can choose whatever communication channel available (i.e. vsock or TCP/IP) to send the TDREPORT to QE and receive the Quote.”h]”hXcAfter getting the TDREPORT, the second step of the attestation process is to send it to the Quoting Enclave (QE) to generate the Quote. TDREPORT by design can only be verified on the local platform as the MAC key is bound to the platform. To support remote verification of the TDREPORT, TDX leverages Intel SGX Quoting Enclave to verify the TDREPORT locally and convert it to a remotely verifiable Quote. Method of sending TDREPORT to QE is implementation specific. Attestation software can choose whatever communication channel available (i.e. 
vsock or TCP/IP) to send the TDREPORT to QE and receive the Quote.”…””}”(hjéh²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÝh³hÇh´M•hj h²hubeh}”(h]”Œ attestation”ah ]”h"]”Œ attestation”ah$]”h&]”uh1hÈhhÊh²hh³hÇh´MuubhÉ)”}”(hhh]”(hÎ)”}”(hŒ References”h]”hŒ References”…””}”(hj h²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÍhjÿh²hh³hÇh´M ubhÞ)”}”(hŒ)TDX reference material is collected here:”h]”hŒ)TDX reference material is collected here:”…””}”(hj h²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÝh³hÇh´M¢hjÿh²hubhÞ)”}”(hŒghttps://www.intel.com/content/www/us/en/developer/articles/technical/intel-trust-domain-extensions.html”h]”hŒ reference”“”)”}”(hj h]”hŒghttps://www.intel.com/content/www/us/en/developer/articles/technical/intel-trust-domain-extensions.html”…””}”(hj$ h²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”Œrefuri”j uh1j" hj ubah}”(h]”h ]”h"]”h$]”h&]”uh1hÝh³hÇh´M¤hjÿh²hubeh}”(h]”Œ references”ah ]”h"]”Œ references”ah$]”h&]”uh1hÈhhÊh²hh³hÇh´M ubeh}”(h]”Œ!intel-trust-domain-extensions-tdx”ah ]”h"]”Œ#intel trust domain extensions (tdx)”ah$]”h&]”uh1hÈhhh²hh³hÇh´Kubeh}”(h]”h ]”h"]”h$]”h&]”Œsource”hÇuh1hŒcurrent_source”NŒ current_line”NŒsettings”Œdocutils.frontend”ŒValues”“”)”}”(hÍNŒ generator”NŒ datestamp”NŒ source_link”NŒ source_url”NŒ toc_backlinks”Œentry”Œfootnote_backlinks”KŒ sectnum_xform”KŒstrip_comments”NŒstrip_elements_with_classes”NŒ strip_classes”NŒ report_level”KŒ halt_level”KŒexit_status_level”KŒdebug”NŒwarning_stream”NŒ traceback”ˆŒinput_encoding”Œ utf-8-sig”Œinput_encoding_error_handler”Œstrict”Œoutput_encoding”Œutf-8”Œoutput_encoding_error_handler”jk Œerror_encoding”Œutf-8”Œerror_encoding_error_handler”Œbackslashreplace”Œ language_code”Œen”Œrecord_dependencies”NŒconfig”NŒ id_prefix”hŒauto_id_prefix”Œid”Œ dump_settings”NŒdump_internals”NŒdump_transforms”NŒdump_pseudo_xml”NŒexpose_internals”NŒstrict_visitor”NŒ_disable_config”NŒ_source”hÇŒ _destination”NŒ _config_files”]”Œ7/var/lib/git/docbuild/linux/Documentation/docutils.conf”aŒfile_insertion_enabled”ˆŒ 
raw_enabled”KŒline_length_limit”M'Œpep_references”NŒ pep_base_url”Œhttps://peps.python.org/”Œpep_file_url_template”Œpep-%04d”Œrfc_references”NŒ rfc_base_url”Œ&https://datatracker.ietf.org/doc/html/”Œ tab_width”KŒtrim_footnote_reference_space”‰Œsyntax_highlight”Œlong”Œ smart_quotes”ˆŒsmartquotes_locales”]”Œcharacter_level_inline_markup”‰Œdoctitle_xform”‰Œ docinfo_xform”KŒsectsubtitle_xform”‰Œ image_loading”Œlink”Œembed_stylesheet”‰Œcloak_email_addresses”ˆŒsection_self_link”‰Œenv”NubŒreporter”NŒindirect_targets”]”Œsubstitution_defs”}”Œsubstitution_names”}”Œrefnames”}”Œrefids”}”Œnameids”}”(jE jB j¦j£j\jYjjþjžj›jGjDjnjkj±j®jjÿj)j&j–j“jjšj¢jŸjÊjÇjxjujjjšj—j]jZjjjUjRj®j«jÿjüj•j’jüjùj= j: uŒ nametypes”}”(jE ‰j¦‰j\‰j‰jž‰jG‰jn‰j±‰j‰j)‰j–‰j‰j¢‰jʉjx‰j‰jš‰j]‰j‰jU‰j®‰jÿ‰j•‰jü‰j= ‰uh}”(jB hÊj£híjYj(jþj_j›jjDjjkjJj®jqjÿj´j&jj“j,jšj©jŸjÖjÇjjujÍjj{j—jjZj¥jjÒjRjj«j`jüj±j’jjùj j: jÿuŒ footnote_refs”}”Œ citation_refs”}”Œ autofootnotes”]”Œautofootnote_refs”]”Œsymbol_footnotes”]”Œsymbol_footnote_refs”]”Œ footnotes”]”Œ citations”]”Œautofootnote_start”KŒsymbol_footnote_start”KŒ id_counter”Œ collections”ŒCounter”“”}”…”R”Œparse_messages”]”Œtransform_messages”]”Œ transformer”NŒ include_log”]”Œ decoration”Nh²hub.