.. SPDX-License-Identifier: GPL-2.0

=====================================
Intel Trust Domain Extensions (TDX)
=====================================

Intel's Trust Domain Extensions (TDX) protect confidential guest VMs from
the host and physical attacks by isolating the guest register state and by
encrypting the guest memory.  In TDX, a special module running in a special
mode sits between the host and the guest and manages the guest/host
separation.

TDX Host Kernel Support
=======================

TDX introduces a new CPU mode called Secure Arbitration Mode (SEAM) and
a new isolated range pointed to by the SEAM Range Register (SEAMRR).  A
CPU-attested software module called 'the TDX module' runs inside the new
isolated range to provide the functionalities to manage and run protected
VMs.

TDX also leverages Intel Multi-Key Total Memory Encryption (MKTME) to
provide crypto-protection to the VMs.  TDX reserves part of MKTME KeyIDs
as TDX private KeyIDs, which are only accessible within the SEAM mode.
The BIOS is responsible for partitioning legacy MKTME KeyIDs and TDX
KeyIDs.

Before the TDX module can be used to create and run protected VMs, it
must be loaded into the isolated range and properly initialized.  The TDX
architecture doesn't require the BIOS to load the TDX module, but the
kernel assumes it is loaded by the BIOS.

TDX boot-time detection
-----------------------

The kernel detects TDX by detecting TDX private KeyIDs during kernel
boot.  The dmesg below shows when TDX is enabled by the BIOS::

  [..] virt/tdx: BIOS enabled: private KeyID range: [16, 64)

TDX module initialization
-------------------------

The kernel talks to the TDX module via the new SEAMCALL instruction.  The
TDX module implements SEAMCALL leaf functions to allow the kernel to
initialize it.

If the TDX module isn't loaded, the SEAMCALL instruction fails with a
special error.  In this case the kernel fails the module initialization
and reports that the module isn't loaded::

  [..] virt/tdx: module not loaded

Initializing the TDX module consumes roughly 1/256th of system RAM to use
as 'metadata' for the TDX memory.  It also takes additional CPU time to
initialize that metadata along with the TDX module itself.  Neither is
trivial, so the kernel initializes the TDX module at runtime, on demand.

Besides initializing the TDX module, a per-cpu initialization SEAMCALL
must be done on a given cpu before any other SEAMCALLs can be made on
that cpu.

The kernel provides two functions, tdx_enable() and tdx_cpu_enable(), to
allow the user of TDX to enable the TDX module and to enable TDX on the
local cpu, respectively.

Making a SEAMCALL requires that VMXON has been done on that CPU.
Currently only KVM implements VMXON.  For now, neither tdx_enable() nor
tdx_cpu_enable() does VMXON internally (it is not trivial); both depend
on the caller to guarantee that.

To enable TDX, the caller of TDX should: 1) temporarily disable CPU
hotplug; 2) do VMXON and tdx_cpu_enable() on all online cpus; 3) call
tdx_enable().  For example::

        cpus_read_lock();
        /* Do VMXON and the per-cpu initialization SEAMCALL everywhere. */
        on_each_cpu(vmxon_and_tdx_cpu_enable, NULL, 1);
        ret = tdx_enable();
        cpus_read_unlock();
        if (ret)
                goto no_tdx;
        /* TDX is ready to use */

And the caller of TDX must guarantee that tdx_cpu_enable() has been done
successfully on a cpu before it runs any other SEAMCALL on that cpu.  A
typical usage is to do both VMXON and tdx_cpu_enable() in the CPU hotplug
online callback, and to refuse to online the cpu if tdx_cpu_enable()
fails, as the sketch below shows.

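Below is a minimal sketch of such an online callback.  The callback name
and the vmxon()/vmxoff() helpers are hypothetical stand-ins for whatever
the caller (e.g. KVM) already uses; tdx_cpu_enable() and
cpuhp_setup_state() are the interfaces described above::

        /* Hypothetical callback: runs on each cpu as it comes online. */
        static int tdx_user_cpu_online(unsigned int cpu)
        {
                int ret;

                ret = vmxon();          /* caller-provided VMXON helper */
                if (ret)
                        return ret;

                ret = tdx_cpu_enable(); /* per-cpu initialization SEAMCALL */
                if (ret)
                        vmxoff();       /* caller-provided VMXOFF helper */

                /* A non-zero return refuses to online this cpu. */
                return ret;
        }

        ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "x86/tdx_user:online",
                                tdx_user_cpu_online, NULL);

Note that cpuhp_setup_state() also invokes the callback on every CPU that
is already online, which covers step 2) above for the boot-time case.
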
Users can consult dmesg to see whether the TDX module has been
initialized.

If the TDX module is initialized successfully, dmesg shows something
like below::

  [..] virt/tdx: 262668 KBs allocated for PAMT
  [..] virt/tdx: module initialized

If the TDX module failed to initialize, dmesg also shows the failure::

  [..] virt/tdx: module initialization failed ...

TDX Interaction to Other Kernel Components
------------------------------------------

TDX Memory Policy
~~~~~~~~~~~~~~~~~

TDX reports a list of "Convertible Memory Regions" (CMRs) to tell the
kernel which memory is TDX compatible.  The kernel needs to build a list
of memory regions (out of CMRs) as "TDX-usable" memory and pass those
regions to the TDX module.  Once this is done, those "TDX-usable" memory
regions are fixed for the module's lifetime.

To keep things simple, currently the kernel simply guarantees that all
pages in the page allocator are TDX memory.  Specifically, the kernel
uses all system memory in the core-mm "at the time of TDX module
initialization" as TDX memory, and in the meantime, refuses to online
any non-TDX memory in the memory hotplug.

Physical Memory Hotplug
~~~~~~~~~~~~~~~~~~~~~~~

Note that TDX assumes convertible memory is always physically present
during the machine's runtime.  A non-buggy BIOS should never support
hot-removal of any convertible memory.  This implementation doesn't
handle ACPI memory removal but depends on the BIOS to behave correctly.

CPU Hotplug
~~~~~~~~~~~

The TDX module requires that the per-cpu initialization SEAMCALL be done
on a cpu before any other SEAMCALLs can be made on that cpu.  The kernel
provides tdx_cpu_enable() to let the user of TDX do it when the user
wants to use a new cpu for a TDX task.

TDX doesn't support physical (ACPI) CPU hotplug.  During machine boot,
TDX verifies that all boot-time present logical CPUs are TDX compatible
before enabling TDX.  A non-buggy BIOS should never support hot-add or
hot-removal of physical CPUs.  Currently the kernel doesn't handle
physical CPU hotplug, but depends on the BIOS to behave correctly.

Note that TDX works with CPU logical online/offline, thus the kernel
still allows offlining a logical CPU and onlining it again.

Kexec()
~~~~~~~

TDX host support currently lacks the ability to handle kexec.  For
simplicity, only one of the two (TDX host support or kexec) can be
enabled in the Kconfig.  This will be fixed in the future.

Erratum
~~~~~~~

The first few generations of TDX hardware have an erratum.  A partial
write to a TDX private memory cacheline will silently "poison" the line.
Subsequent reads will consume the poison and generate a machine check.

A partial write is a memory write where a write transaction of less than
a cacheline lands at the memory controller.  The CPU does these via
non-temporal write instructions (like MOVNTI), or through UC/WC memory
mappings.  Devices can also do partial writes via DMA.

Theoretically, a kernel bug could do a partial write to TDX private
memory and trigger an unexpected machine check.  What's more, the
machine check code will present these as "Hardware error" when they
were, in fact, a software-triggered issue.  But in the end, this issue
is hard to trigger.

If the platform has such an erratum, the kernel prints an additional
message in the machine check handler to tell the user that the machine
check may be caused by a kernel bug on TDX private memory.

Interaction vs S3 and deeper states
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TDX cannot survive S3 and deeper states.  The hardware resets and
disables TDX completely when the platform goes to S3 and deeper.  Both
TDX guests and the TDX module get destroyed permanently.

The kernel uses S3 for suspend-to-ram, and uses S4 and deeper states for
hibernation.  Currently, for simplicity, the kernel chooses to make TDX
mutually exclusive with S3 and hibernation.

The kernel disables TDX during early boot when hibernation support is
available::

  [..] virt/tdx: initialization failed: Hibernation support is enabled

Add 'nohibernate' to the kernel command line to disable hibernation in
order to use TDX.

ACPI S3 is disabled during kernel early boot if TDX is enabled.  The
user needs to turn off TDX in the BIOS in order to use S3.

TDX Guest Support
=================

Since the host cannot directly access guest registers or memory, much
normal functionality of a hypervisor must be moved into the guest.  This
is implemented using a Virtualization Exception (#VE) that is handled by
the guest kernel.  Some #VEs are handled entirely inside the guest
kernel, but others require the hypervisor to be consulted.

TDX includes new hypercall-like mechanisms for communicating from the
guest to the hypervisor or the TDX module.

New TDX Exceptions
------------------

TDX guests behave differently from bare-metal and traditional VMX guests.
In TDX guests, otherwise normal instructions or memory accesses can cause
#VE or #GP exceptions.

Instructions marked with an '*' conditionally cause exceptions.  The
details for these instructions are discussed below.

Instruction-based #VE
~~~~~~~~~~~~~~~~~~~~~

- Port I/O (INS, OUTS, IN, OUT)
- HLT
- MONITOR, MWAIT
- WBINVD, INVD
- VMCALL
- RDMSR*, WRMSR*
- CPUID*

Instruction-based #GP
~~~~~~~~~~~~~~~~~~~~~

- All VMX instructions: INVEPT, INVVPID, VMCLEAR, VMFUNC, VMLAUNCH,
  VMPTRLD, VMPTRST, VMREAD, VMRESUME, VMWRITE, VMXOFF, VMXON
- ENCLS, ENCLU
- GETSEC
- RSM
- ENQCMD
- RDMSR*, WRMSR*

RDMSR/WRMSR Behavior
~~~~~~~~~~~~~~~~~~~~

MSR access behavior falls into three categories:

- #GP generated
- #VE generated
- "Just works"

In general, the #GP MSRs should not be used in guests.  Their use likely
indicates a bug in the guest.  The guest may try to handle the #GP with a
hypercall but it is unlikely to succeed.

The #VE MSRs are typically able to be handled by the hypervisor.  Guests
can make a hypercall to the hypervisor to handle the #VE.

The "just works" MSRs do not need any special guest handling.  They might
be implemented by directly passing through the MSR to the hardware or by
trapping and handling in the TDX module.  Other than possibly being slow,
these MSRs appear to function just as they would on bare metal.

CPUID Behavior
~~~~~~~~~~~~~~

For some CPUID leaves and sub-leaves, the virtualized bit fields of CPUID
return values (in guest EAX/EBX/ECX/EDX) are configurable by the
hypervisor.  For such cases, the Intel TDX module architecture defines two
virtualization types:

- Bit fields for which the hypervisor controls the value seen by the
  guest TD.

- Bit fields for which the hypervisor configures the value such that the
  guest TD either sees their native value or a value of 0.  For these
  bit fields, the hypervisor can mask off the native values, but it can
  not turn *on* values.

A #VE is generated for CPUID leaves and sub-leaves that the TDX module
does not know how to handle.  The guest kernel may ask the hypervisor for
the value with a hypercall.

#VE on Memory Accesses
----------------------

There are essentially two classes of TDX memory: private and shared.
Private memory receives full TDX protections.  Its content is protected
against access from the hypervisor.  Shared memory is expected to be
shared between guest and hypervisor and does not receive full TDX
protections.

A TD guest is in control of whether its memory accesses are treated as
private or shared.  It selects the behavior with a bit in its page table
entries.  This helps ensure that a guest does not place sensitive
information in shared memory, exposing it to the untrusted hypervisor.

#VE on Shared Memory
~~~~~~~~~~~~~~~~~~~~

Access to shared mappings can cause a #VE.  The hypervisor ultimately
controls whether a shared memory access causes a #VE, so the guest must
be careful to only reference shared pages for which it can safely handle
a #VE.  For instance, the guest should be careful not to access shared
memory in the #VE handler before it reads the #VE info structure
(TDG.VP.VEINFO.GET).

Shared mapping content is entirely controlled by the hypervisor.  The
guest should only use shared mappings for communicating with the
hypervisor.  Shared mappings must never be used for sensitive memory
content like kernel stacks.  A good rule of thumb is that
hypervisor-shared memory should be treated the same as memory mapped to
userspace.  Both the hypervisor and userspace are completely untrusted.

MMIO for virtual devices is implemented as shared memory.  The guest must
be careful not to access device MMIO regions unless it is also prepared
to handle a #VE.

#VE on Private Pages
~~~~~~~~~~~~~~~~~~~~

An access to private mappings can also cause a #VE.  Since all kernel
memory is also private memory, the kernel might theoretically need to
handle a #VE on arbitrary kernel memory accesses.  This is not feasible,
so TDX guests ensure that all guest memory has been "accepted" before
memory is used by the kernel.

A modest amount of memory (typically 512M) is pre-accepted by the
firmware before the kernel runs, to ensure that the kernel can start up
without being subjected to a #VE.

The hypervisor is permitted to unilaterally move accepted pages to a
"blocked" state.  However, if it does this, a page access will not
generate a #VE.  It will, instead, cause a "TD Exit" where the hypervisor
is required to handle the exception.

Linux #VE handler
-----------------

Just like page faults or #GPs, #VE exceptions can be either handled or
fatal.  Typically, an unhandled userspace #VE results in a SIGSEGV.  An
unhandled kernel #VE results in an oops.

Handling nested exceptions on x86 is typically nasty business.  A #VE
could be interrupted by an NMI which triggers another #VE, and hilarity
ensues.  The TDX #VE architecture anticipated this scenario and includes
a feature to make it slightly less nasty.

During #VE handling, the TDX module ensures that all interrupts
(including NMIs) are blocked.  The block remains in place until the guest
makes a TDG.VP.VEINFO.GET TDCALL.  This allows the guest to control when
interrupts or a new #VE can be delivered.

However, the guest kernel must still be careful to avoid potential
#VE-triggering actions (discussed above) while this block is in place.
While the block is in place, any #VE is elevated to a double fault (#DF)
which is not recoverable.

MMIO handling
-------------

In non-TDX VMs, MMIO is usually implemented by giving a guest access to a
mapping which will cause a VMEXIT on access, and then the hypervisor
emulates the access.  That is not possible in TDX guests because VMEXIT
will expose the register state to the host.  TDX guests don't trust the
host and can't have their state exposed to the host.

In TDX, MMIO regions typically trigger a #VE exception in the guest.  The
guest #VE handler then emulates the MMIO instruction inside the guest and
converts it into a controlled TDCALL to the host, rather than exposing
guest state to the host.

MMIO addresses on x86 are just special physical addresses.  They can
theoretically be accessed with any instruction that accesses memory.
However, the kernel instruction decoding method is limited.  It is only
designed to decode instructions like those generated by io.h macros.

MMIO access via other means (like structure overlays) may result in an
oops.

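To make the distinction concrete, the sketch below contrasts the two
access styles.  The device register layout and offsets are hypothetical;
ioremap() and the readl()/writel() accessors are the io.h interfaces
referred to above::

        /* Hypothetical device register layout. */
        struct hyp_regs {
                u32 status;
                u32 cmd;
        };

        void __iomem *regs = ioremap(phys_addr, size);
        u32 status;

        /* OK: io.h accessors generate instructions that the #VE
         * handler's instruction decoder is designed for. */
        status = readl(regs + HYP_REG_STATUS);
        writel(HYP_CMD_START, regs + HYP_REG_CMD);

        /* Risky: a structure overlay bypasses io.h and may generate
         * an instruction the decoder cannot handle, causing an oops. */
        struct hyp_regs __iomem *r = regs;
        status = r->status;
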
Shared Memory Conversions
-------------------------

All TDX guest memory starts out as private at boot.  This memory can not
be accessed by the hypervisor.  However, some kernel users like device
drivers might have a need to share data with the hypervisor.  To do this,
memory must be converted between shared and private.  This can be
accomplished using some existing memory encryption helpers:

 * set_memory_decrypted() converts a range of pages to shared.
 * set_memory_encrypted() converts memory back to private.

Device drivers are the primary user of shared memory, but there's no need
to touch every driver.  DMA buffers and ioremap() do the conversions
automatically.

TDX uses SWIOTLB for most DMA allocations.  The SWIOTLB buffer is
converted to shared on boot.

For coherent DMA allocation, the DMA buffer gets converted on the
allocation.  Check force_dma_unencrypted() for details.

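As an illustration, a driver that wants to share one page with the
hypervisor might follow the pattern sketched below.  The surrounding
driver context is hypothetical; set_memory_decrypted() and
set_memory_encrypted() are the helpers listed above::

        /* Hypothetical: share one page with the hypervisor. */
        unsigned long page = __get_free_page(GFP_KERNEL);
        int ret;

        if (!page)
                return -ENOMEM;

        ret = set_memory_decrypted(page, 1);    /* convert to shared */
        if (ret) {
                free_page(page);
                return ret;
        }

        /* ... communicate with the hypervisor through the page ... */

        /* Convert back to private before freeing the page. */
        ret = set_memory_encrypted(page, 1);
        if (WARN_ON(ret))
                return ret;     /* deliberately leak the page */
        free_page(page);

The page is deliberately leaked if the conversion back to private fails:
freeing a still-shared page would hand hypervisor-visible memory back to
the page allocator.
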
Attestation
===========

Attestation is used to verify the TDX guest trustworthiness to other
entities before provisioning secrets to the guest.  For example, a key
server may want to use attestation to verify that the guest is the
desired one before releasing the encryption keys to mount the encrypted
rootfs or a secondary drive.

The TDX module records the state of the TDX guest in various stages of
the guest boot process using the build time measurement register (MRTD)
and runtime measurement registers (RTMR).  Measurements related to the
guest initial configuration and firmware image are recorded in the MRTD
register.  Measurements related to initial state, kernel image, firmware
image, command line options, initrd, ACPI tables, etc. are recorded in
RTMR registers.  For more details, as an example, please refer to the
TDX Virtual Firmware design specification, section titled "TD
Measurement".  At TDX guest runtime, the attestation process is used to
attest to these measurements.

The attestation process consists of two steps: TDREPORT generation and
Quote generation.

The TDX guest uses TDCALL[TDG.MR.REPORT] to get the TDREPORT
(TDREPORT_STRUCT) from the TDX module.  TDREPORT is a fixed-size data
structure generated by the TDX module which contains guest-specific
information (such as build and boot measurements), platform security
version, and the MAC to protect the integrity of the TDREPORT.  A
user-provided 64-Byte REPORTDATA is used as input and included in the
TDREPORT.  Typically it can be some nonce provided by the attestation
service so the TDREPORT can be verified uniquely.  More details about
the TDREPORT can be found in the Intel TDX Module specification, section
titled "TDG.MR.REPORT Leaf".

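In the Linux guest, this TDCALL is exposed to userspace through a
character device.  The sketch below is a minimal illustration which
assumes the /dev/tdx_guest device and the TDX_CMD_GET_REPORT0 ioctl from
<linux/tdx-guest.h> are available; error handling is abbreviated::

        #include <fcntl.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <linux/tdx-guest.h>

        /*
         * nonce must be TDX_REPORTDATA_LEN (64) bytes; tdreport must
         * have room for TDX_REPORT_LEN (1024) bytes.
         */
        int get_tdreport(const unsigned char *nonce, unsigned char *tdreport)
        {
                struct tdx_report_req req = {};
                int fd, ret;

                /* REPORTDATA: e.g. a nonce from the attestation service. */
                memcpy(req.reportdata, nonce, sizeof(req.reportdata));

                fd = open("/dev/tdx_guest", O_RDWR);
                if (fd < 0)
                        return -1;

                /* Asks the driver to issue TDCALL[TDG.MR.REPORT]. */
                ret = ioctl(fd, TDX_CMD_GET_REPORT0, &req);
                if (!ret)
                        memcpy(tdreport, req.tdreport, sizeof(req.tdreport));

                close(fd);
                return ret;
        }
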
After getting the TDREPORT, the second step of the attestation process is
to send it to the Quoting Enclave (QE) to generate the Quote.  TDREPORT
by design can only be verified on the local platform, as the MAC key is
bound to the platform.  To support remote verification of the TDREPORT,
TDX leverages the Intel SGX Quoting Enclave to verify the TDREPORT
locally and convert it to a remotely verifiable Quote.  The method of
sending the TDREPORT to the QE is implementation specific.  Attestation
software can choose whatever communication channel is available (i.e.
vsock or TCP/IP) to send the TDREPORT to the QE and receive the Quote.

References
==========

TDX reference material is collected here:

https://www.intel.com/content/www/us/en/developer/articles/technical/intel-trust-domain-extensions.html