€•*yŒsphinx.addnodes”Œdocument”“”)”}”(Œ rawsource”Œ”Œchildren”]”(Œ translations”Œ LanguagesNode”“”)”}”(hhh]”(hŒ pending_xref”“”)”}”(hhh]”Œdocutils.nodes”ŒText”“”ŒChinese (Simplified)”…””}”Œparent”hsbaŒ attributes”}”(Œids”]”Œclasses”]”Œnames”]”Œdupnames”]”Œbackrefs”]”Œ refdomain”Œstd”Œreftype”Œdoc”Œ reftarget”Œ(/translations/zh_CN/virt/hyperv/overview”Œmodname”NŒ classname”NŒ refexplicit”ˆuŒtagname”hhh ubh)”}”(hhh]”hŒChinese (Traditional)”…””}”hh2sbah}”(h]”h ]”h"]”h$]”h&]”Œ refdomain”h)Œreftype”h+Œ reftarget”Œ(/translations/zh_TW/virt/hyperv/overview”Œmodname”NŒ classname”NŒ refexplicit”ˆuh1hhh ubh)”}”(hhh]”hŒItalian”…””}”hhFsbah}”(h]”h ]”h"]”h$]”h&]”Œ refdomain”h)Œreftype”h+Œ reftarget”Œ(/translations/it_IT/virt/hyperv/overview”Œmodname”NŒ classname”NŒ refexplicit”ˆuh1hhh ubh)”}”(hhh]”hŒJapanese”…””}”hhZsbah}”(h]”h ]”h"]”h$]”h&]”Œ refdomain”h)Œreftype”h+Œ reftarget”Œ(/translations/ja_JP/virt/hyperv/overview”Œmodname”NŒ classname”NŒ refexplicit”ˆuh1hhh ubh)”}”(hhh]”hŒKorean”…””}”hhnsbah}”(h]”h ]”h"]”h$]”h&]”Œ refdomain”h)Œreftype”h+Œ reftarget”Œ(/translations/ko_KR/virt/hyperv/overview”Œmodname”NŒ classname”NŒ refexplicit”ˆuh1hhh ubh)”}”(hhh]”hŒSpanish”…””}”hh‚sbah}”(h]”h ]”h"]”h$]”h&]”Œ refdomain”h)Œreftype”h+Œ reftarget”Œ(/translations/sp_SP/virt/hyperv/overview”Œmodname”NŒ classname”NŒ refexplicit”ˆuh1hhh ubeh}”(h]”h ]”h"]”h$]”h&]”Œcurrent_language”ŒEnglish”uh1h hhŒ _document”hŒsource”NŒline”NubhŒcomment”“”)”}”(hŒ SPDX-License-Identifier: GPL-2.0”h]”hŒ SPDX-License-Identifier: GPL-2.0”…””}”hh£sbah}”(h]”h ]”h"]”h$]”h&]”Œ xml:space”Œpreserve”uh1h¡hhhžhhŸŒB/var/lib/git/docbuild/linux/Documentation/virt/hyperv/overview.rst”h KubhŒsection”“”)”}”(hhh]”(hŒtitle”“”)”}”(hŒOverview”h]”hŒOverview”…””}”(hh»hžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1h¹hh¶hžhhŸh³h KubhŒ paragraph”“”)”}”(hXThe Linux kernel contains a variety of code for running as a fully enlightened guest on Microsoft's Hyper-V hypervisor. 
Hyper-V consists primarily of a bare-metal hypervisor plus a virtual machine management service running in the parent partition (roughly equivalent to KVM and QEMU, for example). Guest VMs run in child partitions. In this documentation, references to Hyper-V usually encompass both the hypervisor and the VMM service without making a distinction about which functionality is provided by which component.”h]”hXThe Linux kernel contains a variety of code for running as a fully enlightened guest on Microsoft’s Hyper-V hypervisor. Hyper-V consists primarily of a bare-metal hypervisor plus a virtual machine management service running in the parent partition (roughly equivalent to KVM and QEMU, for example). Guest VMs run in child partitions. In this documentation, references to Hyper-V usually encompass both the hypervisor and the VMM service without making a distinction about which functionality is provided by which component.”…””}”(hhËhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h Khh¶hžhubhÊ)”}”(hŒÇHyper-V runs on x86/x64 and arm64 architectures, and Linux guests are supported on both. The functionality and behavior of Hyper-V is generally the same on both architectures unless noted otherwise.”h]”hŒÇHyper-V runs on x86/x64 and arm64 architectures, and Linux guests are supported on both. 
The functionality and behavior of Hyper-V is generally the same on both architectures unless noted otherwise.”…””}”(hhÙhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h Khh¶hžhubhµ)”}”(hhh]”(hº)”}”(hŒ&Linux Guest Communication with Hyper-V”h]”hŒ&Linux Guest Communication with Hyper-V”…””}”(hhêhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1h¹hhçhžhhŸh³h KubhÊ)”}”(hŒ=Linux guests communicate with Hyper-V in four different ways:”h]”hŒ=Linux guests communicate with Hyper-V in four different ways:”…””}”(hhøhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h KhhçhžhubhŒ bullet_list”“”)”}”(hhh]”(hŒ list_item”“”)”}”(hŒßImplicit traps: As defined by the x86/x64 or arm64 architecture, some guest actions trap to Hyper-V. Hyper-V emulates the action and returns control to the guest. This behavior is generally invisible to the Linux kernel. ”h]”hÊ)”}”(hŒÞImplicit traps: As defined by the x86/x64 or arm64 architecture, some guest actions trap to Hyper-V. Hyper-V emulates the action and returns control to the guest. This behavior is generally invisible to the Linux kernel.”h]”hŒÞImplicit traps: As defined by the x86/x64 or arm64 architecture, some guest actions trap to Hyper-V. Hyper-V emulates the action and returns control to the guest. This behavior is generally invisible to the Linux kernel.”…””}”(hjhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h Khj ubah}”(h]”h ]”h"]”h$]”h&]”uh1j hjhžhhŸh³h Nubj )”}”(hX‘Explicit hypercalls: Linux makes an explicit function call to Hyper-V, passing parameters. Hyper-V performs the requested action and returns control to the caller. Parameters are passed in processor registers or in memory shared between the Linux guest and Hyper-V. On x86/x64, hypercalls use a Hyper-V specific calling sequence. On arm64, hypercalls use the ARM standard SMCCC calling sequence. ”h]”hÊ)”}”(hXExplicit hypercalls: Linux makes an explicit function call to Hyper-V, passing parameters. Hyper-V performs the requested action and returns control to the caller. 
Parameters are passed in processor registers or in memory shared between the Linux guest and Hyper-V. On x86/x64, hypercalls use a Hyper-V specific calling sequence. On arm64, hypercalls use the ARM standard SMCCC calling sequence.”h]”hXExplicit hypercalls: Linux makes an explicit function call to Hyper-V, passing parameters. Hyper-V performs the requested action and returns control to the caller. Parameters are passed in processor registers or in memory shared between the Linux guest and Hyper-V. On x86/x64, hypercalls use a Hyper-V specific calling sequence. On arm64, hypercalls use the ARM standard SMCCC calling sequence.”…””}”(hj)hžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h Khj%ubah}”(h]”h ]”h"]”h$]”h&]”uh1j hjhžhhŸh³h Nubj )”}”(hXMSynthetic register access: Hyper-V implements a variety of synthetic registers. On x86/x64 these registers appear as MSRs in the guest, and the Linux kernel can read or write these MSRs using the normal mechanisms defined by the x86/x64 architecture. On arm64, these synthetic registers must be accessed using explicit hypercalls. ”h]”hÊ)”}”(hXLSynthetic register access: Hyper-V implements a variety of synthetic registers. On x86/x64 these registers appear as MSRs in the guest, and the Linux kernel can read or write these MSRs using the normal mechanisms defined by the x86/x64 architecture. On arm64, these synthetic registers must be accessed using explicit hypercalls.”h]”hXLSynthetic register access: Hyper-V implements a variety of synthetic registers. On x86/x64 these registers appear as MSRs in the guest, and the Linux kernel can read or write these MSRs using the normal mechanisms defined by the x86/x64 architecture. On arm64, these synthetic registers must be accessed using explicit hypercalls.”…””}”(hjAhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h K$hj=ubah}”(h]”h ]”h"]”h$]”h&]”uh1j hjhžhhŸh³h Nubj )”}”(hXVMBus: VMBus is a higher-level software construct that is built on the other 3 mechanisms. 
It is a message passing interface between the Hyper-V host and the Linux guest. It uses memory that is shared between Hyper-V and the guest, along with various signaling mechanisms. ”h]”hÊ)”}”(hXVMBus: VMBus is a higher-level software construct that is built on the other 3 mechanisms. It is a message passing interface between the Hyper-V host and the Linux guest. It uses memory that is shared between Hyper-V and the guest, along with various signaling mechanisms.”h]”hXVMBus: VMBus is a higher-level software construct that is built on the other 3 mechanisms. It is a message passing interface between the Hyper-V host and the Linux guest. It uses memory that is shared between Hyper-V and the guest, along with various signaling mechanisms.”…””}”(hjYhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h K+hjUubah}”(h]”h ]”h"]”h$]”h&]”uh1j hjhžhhŸh³h Nubeh}”(h]”h ]”h"]”h$]”h&]”Œbullet”Œ*”uh1jhŸh³h KhhçhžhubhÊ)”}”(hXThe first three communication mechanisms are documented in the `Hyper-V Top Level Functional Spec (TLFS)`_. The TLFS describes general Hyper-V functionality and provides details on the hypercalls and synthetic registers. The TLFS is currently written for the x86/x64 architecture only.”h]”(hŒ?The first three communication mechanisms are documented in the ”…””}”(hjuhžhhŸNh NubhŒ reference”“”)”}”(hŒ+`Hyper-V Top Level Functional Spec (TLFS)`_”h]”hŒ(Hyper-V Top Level Functional Spec (TLFS)”…””}”(hjhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”Œname”Œ(Hyper-V Top Level Functional Spec (TLFS)”Œrefuri”ŒLhttps://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/tlfs/tlfs”uh1j}hjuŒresolved”KubhŒµ. The TLFS describes general Hyper-V functionality and provides details on the hypercalls and synthetic registers. The TLFS is currently written for the x86/x64 architecture only.”…””}”(hjuhžhhŸNh Nubeh}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h K1hhçhžhubhŒtarget”“”)”}”(hŒz.. 
_Hyper-V Top Level Functional Spec (TLFS): https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/tlfs/tlfs”h]”h}”(h]”Œ&hyper-v-top-level-functional-spec-tlfs”ah ]”h"]”Œ(hyper-v top level functional spec (tlfs)”ah$]”h&]”jjuh1jœh K7hhçhžhhŸh³Œ referenced”KubhÊ)”}”(hŒ›VMBus is not documented. This documentation provides a high-level overview of VMBus and how it works, but the details can be discerned only from the code.”h]”hŒ›VMBus is not documented. This documentation provides a high-level overview of VMBus and how it works, but the details can be discerned only from the code.”…””}”(hj«hžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h K9hhçhžhubeh}”(h]”Œ&linux-guest-communication-with-hyper-v”ah ]”h"]”Œ&linux guest communication with hyper-v”ah$]”h&]”uh1h´hh¶hžhhŸh³h Kubhµ)”}”(hhh]”(hº)”}”(hŒSharing Memory”h]”hŒSharing Memory”…””}”(hjÄhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1h¹hjÁhžhhŸh³h K>ubhÊ)”}”(hŒ‰Many aspects of communication between Hyper-V and Linux are based on sharing memory. Such sharing is generally accomplished as follows:”h]”hŒ‰Many aspects of communication between Hyper-V and Linux are based on sharing memory. Such sharing is generally accomplished as follows:”…””}”(hjÒhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h K?hjÁhžhubj)”}”(hhh]”(j )”}”(hŒXLinux allocates memory from its physical address space using standard Linux mechanisms. ”h]”hÊ)”}”(hŒWLinux allocates memory from its physical address space using standard Linux mechanisms.”h]”hŒWLinux allocates memory from its physical address space using standard Linux mechanisms.”…””}”(hjçhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h KChjãubah}”(h]”h ]”h"]”h$]”h&]”uh1j hjàhžhhŸh³h Nubj )”}”(hXÕLinux tells Hyper-V the guest physical address (GPA) of the allocated memory. Many shared areas are kept to 1 page so that a single GPA is sufficient. Larger shared areas require a list of GPAs, which usually do not need to be contiguous in the guest physical address space. 
How Hyper-V is told about the GPA or list of GPAs varies. In some cases, a single GPA is written to a synthetic register. In other cases, a GPA or list of GPAs is sent in a VMBus message. ”h]”hÊ)”}”(hXÔLinux tells Hyper-V the guest physical address (GPA) of the allocated memory. Many shared areas are kept to 1 page so that a single GPA is sufficient. Larger shared areas require a list of GPAs, which usually do not need to be contiguous in the guest physical address space. How Hyper-V is told about the GPA or list of GPAs varies. In some cases, a single GPA is written to a synthetic register. In other cases, a GPA or list of GPAs is sent in a VMBus message.”h]”hXÔLinux tells Hyper-V the guest physical address (GPA) of the allocated memory. Many shared areas are kept to 1 page so that a single GPA is sufficient. Larger shared areas require a list of GPAs, which usually do not need to be contiguous in the guest physical address space. How Hyper-V is told about the GPA or list of GPAs varies. In some cases, a single GPA is written to a synthetic register. In other cases, a GPA or list of GPAs is sent in a VMBus message.”…””}”(hjÿhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h KFhjûubah}”(h]”h ]”h"]”h$]”h&]”uh1j hjàhžhhŸh³h Nubj )”}”(hŒ‡Hyper-V translates the GPAs into "real" physical memory addresses, and creates a virtual mapping that it can use to access the memory. ”h]”hÊ)”}”(hŒ†Hyper-V translates the GPAs into "real" physical memory addresses, and creates a virtual mapping that it can use to access the memory.”h]”hŒŠHyper-V translates the GPAs into “real†physical memory addresses, and creates a virtual mapping that it can use to access the memory.”…””}”(hjhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h KOhjubah}”(h]”h ]”h"]”h$]”h&]”uh1j hjàhžhhŸh³h Nubj )”}”(hŒoLinux can later revoke sharing it has previously established by telling Hyper-V to set the shared GPA to zero. 
”h]”hÊ)”}”(hŒnLinux can later revoke sharing it has previously established by telling Hyper-V to set the shared GPA to zero.”h]”hŒnLinux can later revoke sharing it has previously established by telling Hyper-V to set the shared GPA to zero.”…””}”(hj/hžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h KRhj+ubah}”(h]”h ]”h"]”h$]”h&]”uh1j hjàhžhhŸh³h Nubeh}”(h]”h ]”h"]”h$]”h&]”jsjtuh1jhŸh³h KChjÁhžhubhÊ)”}”(hXvHyper-V operates with a page size of 4 Kbytes. GPAs communicated to Hyper-V may be in the form of page numbers, and always describe a range of 4 Kbytes. Since the Linux guest page size on x86/x64 is also 4 Kbytes, the mapping from guest page to Hyper-V page is 1-to-1. On arm64, Hyper-V supports guests with 4/16/64 Kbyte pages as defined by the arm64 architecture. If Linux is using 16 or 64 Kbyte pages, Linux code must be careful to communicate with Hyper-V only in terms of 4 Kbyte pages. HV_HYP_PAGE_SIZE and related macros are used in code that communicates with Hyper-V so that it works correctly in all configurations.”h]”hXvHyper-V operates with a page size of 4 Kbytes. GPAs communicated to Hyper-V may be in the form of page numbers, and always describe a range of 4 Kbytes. Since the Linux guest page size on x86/x64 is also 4 Kbytes, the mapping from guest page to Hyper-V page is 1-to-1. On arm64, Hyper-V supports guests with 4/16/64 Kbyte pages as defined by the arm64 architecture. If Linux is using 16 or 64 Kbyte pages, Linux code must be careful to communicate with Hyper-V only in terms of 4 Kbyte pages. HV_HYP_PAGE_SIZE and related macros are used in code that communicates with Hyper-V so that it works correctly in all configurations.”…””}”(hjIhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h KUhjÁhžhubhÊ)”}”(hXcAs described in the TLFS, a few memory pages shared between Hyper-V and the Linux guest are "overlay" pages. With overlay pages, Linux uses the usual approach of allocating guest memory and telling Hyper-V the GPA of the allocated memory. 
But Hyper-V then replaces that physical memory page with a page it has allocated, and the original physical memory page is no longer accessible in the guest VM. Linux may access the memory normally as if it were the memory that it originally allocated. The "overlay" behavior is visible only because the contents of the page (as seen by Linux) change at the time that Linux originally establishes the sharing and the overlay page is inserted. Similarly, the contents change if Linux revokes the sharing, in which case Hyper-V removes the overlay page, and the guest page originally allocated by Linux becomes visible again.”h]”hXkAs described in the TLFS, a few memory pages shared between Hyper-V and the Linux guest are “overlay†pages. With overlay pages, Linux uses the usual approach of allocating guest memory and telling Hyper-V the GPA of the allocated memory. But Hyper-V then replaces that physical memory page with a page it has allocated, and the original physical memory page is no longer accessible in the guest VM. Linux may access the memory normally as if it were the memory that it originally allocated. The “overlay†behavior is visible only because the contents of the page (as seen by Linux) change at the time that Linux originally establishes the sharing and the overlay page is inserted. Similarly, the contents change if Linux revokes the sharing, in which case Hyper-V removes the overlay page, and the guest page originally allocated by Linux becomes visible again.”…””}”(hjWhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h K`hjÁhžhubhÊ)”}”(hX«Before Linux does a kexec to a kdump kernel or any other kernel, memory shared with Hyper-V should be revoked. Hyper-V could modify a shared page or remove an overlay page after the new kernel is using the page for a different purpose, corrupting the new kernel. Hyper-V does not provide a single "set everything" operation to guest VMs, so Linux code must individually revoke all sharing before doing kexec. 
See hv_kexec_handler() and hv_crash_handler(). But the crash/panic path still has holes in cleanup because some shared pages are set using per-CPU synthetic registers and there's no mechanism to revoke the shared pages for CPUs other than the CPU running the panic path.”h]”hX±Before Linux does a kexec to a kdump kernel or any other kernel, memory shared with Hyper-V should be revoked. Hyper-V could modify a shared page or remove an overlay page after the new kernel is using the page for a different purpose, corrupting the new kernel. Hyper-V does not provide a single “set everything†operation to guest VMs, so Linux code must individually revoke all sharing before doing kexec. See hv_kexec_handler() and hv_crash_handler(). But the crash/panic path still has holes in cleanup because some shared pages are set using per-CPU synthetic registers and there’s no mechanism to revoke the shared pages for CPUs other than the CPU running the panic path.”…””}”(hjehžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h KohjÁhžhubeh}”(h]”Œsharing-memory”ah ]”h"]”Œsharing memory”ah$]”h&]”uh1h´hh¶hžhhŸh³h K>ubhµ)”}”(hhh]”(hº)”}”(hŒCPU Management”h]”hŒCPU Management”…””}”(hj~hžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1h¹hj{hžhhŸh³h K|ubhÊ)”}”(hXHyper-V does not have the ability to hot-add or hot-remove a CPU from a running VM. However, Windows Server 2019 Hyper-V and earlier versions may provide guests with ACPI tables that indicate more CPUs than are actually present in the VM. As is normal, Linux treats these additional CPUs as potential hot-add CPUs, and reports them as such even though Hyper-V will never actually hot-add them. Starting in Windows Server 2022 Hyper-V, the ACPI tables reflect only the CPUs actually present in the VM, so Linux does not report any hot-add CPUs.”h]”hXHyper-V does not have the ability to hot-add or hot-remove a CPU from a running VM. 
However, Windows Server 2019 Hyper-V and earlier versions may provide guests with ACPI tables that indicate more CPUs than are actually present in the VM. As is normal, Linux treats these additional CPUs as potential hot-add CPUs, and reports them as such even though Hyper-V will never actually hot-add them. Starting in Windows Server 2022 Hyper-V, the ACPI tables reflect only the CPUs actually present in the VM, so Linux does not report any hot-add CPUs.”…””}”(hjŒhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h K}hj{hžhubhÊ)”}”(hXA Linux guest CPU may be taken offline using the normal Linux mechanisms, provided no VMBus channel interrupts are assigned to the CPU. See the section on VMBus Interrupts for more details on how VMBus channel interrupts can be re-assigned to permit taking a CPU offline.”h]”hXA Linux guest CPU may be taken offline using the normal Linux mechanisms, provided no VMBus channel interrupts are assigned to the CPU. See the section on VMBus Interrupts for more details on how VMBus channel interrupts can be re-assigned to permit taking a CPU offline.”…””}”(hjšhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h K‡hj{hžhubeh}”(h]”Œcpu-management”ah ]”h"]”Œcpu management”ah$]”h&]”uh1h´hh¶hžhhŸh³h K|ubhµ)”}”(hhh]”(hº)”}”(hŒ32-bit and 64-bit”h]”hŒ32-bit and 64-bit”…””}”(hj³hžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1h¹hj°hžhhŸh³h KŽubhÊ)”}”(hŒÒOn x86/x64, Hyper-V supports 32-bit and 64-bit guests, and Linux will build and run in either version. While the 32-bit version is expected to work, it is used rarely and may suffer from undetected regressions.”h]”hŒÒOn x86/x64, Hyper-V supports 32-bit and 64-bit guests, and Linux will build and run in either version. 
While the 32-bit version is expected to work, it is used rarely and may suffer from undetected regressions.”…””}”(hjÁhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h Khj°hžhubhÊ)”}”(hŒ.On arm64, Hyper-V supports only 64-bit guests.”h]”hŒ.On arm64, Hyper-V supports only 64-bit guests.”…””}”(hjÏhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h K”hj°hžhubeh}”(h]”Œbit-and-64-bit”ah ]”h"]”Œ32-bit and 64-bit”ah$]”h&]”uh1h´hh¶hžhhŸh³h KŽubhµ)”}”(hhh]”(hº)”}”(hŒ Endian-ness”h]”hŒ Endian-ness”…””}”(hjèhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1h¹hjåhžhhŸh³h K—ubhÊ)”}”(hŒõAll communication between Hyper-V and guest VMs uses Little-Endian format on both x86/x64 and arm64. Big-endian format on arm64 is not supported by Hyper-V, and Linux code does not use endian-ness macros when accessing data shared with Hyper-V.”h]”hŒõAll communication between Hyper-V and guest VMs uses Little-Endian format on both x86/x64 and arm64. Big-endian format on arm64 is not supported by Hyper-V, and Linux code does not use endian-ness macros when accessing data shared with Hyper-V.”…””}”(hjöhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h K˜hjåhžhubeh}”(h]”Œ endian-ness”ah ]”h"]”Œ endian-ness”ah$]”h&]”uh1h´hh¶hžhhŸh³h K—ubhµ)”}”(hhh]”(hº)”}”(hŒ Versioning”h]”hŒ Versioning”…””}”(hjhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1h¹hj hžhhŸh³h KžubhÊ)”}”(hŒÐCurrent Linux kernels operate correctly with older versions of Hyper-V back to Windows Server 2012 Hyper-V. Support for running on the original Hyper-V release in Windows Server 2008/2008 R2 has been removed.”h]”hŒÐCurrent Linux kernels operate correctly with older versions of Hyper-V back to Windows Server 2012 Hyper-V. Support for running on the original Hyper-V release in Windows Server 2008/2008 R2 has been removed.”…””}”(hjhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h KŸhj hžhubhÊ)”}”(hX¬A Linux guest on Hyper-V outputs in dmesg the version of Hyper-V it is running on. This version is in the form of a Windows build number and is for display purposes only. 
Linux code does not test this version number at runtime to determine available features and functionality. Hyper-V indicates feature/function availability via flags in synthetic MSRs that Hyper-V provides to the guest, and the guest code tests these flags.”h]”hX¬A Linux guest on Hyper-V outputs in dmesg the version of Hyper-V it is running on. This version is in the form of a Windows build number and is for display purposes only. Linux code does not test this version number at runtime to determine available features and functionality. Hyper-V indicates feature/function availability via flags in synthetic MSRs that Hyper-V provides to the guest, and the guest code tests these flags.”…””}”(hj+hžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h K¤hj hžhubhÊ)”}”(hXVMBus has its own protocol version that is negotiated during the initial VMBus connection from the guest to Hyper-V. This version number is also output to dmesg during boot. This version number is checked in a few places in the code to determine if specific functionality is present.”h]”hXVMBus has its own protocol version that is negotiated during the initial VMBus connection from the guest to Hyper-V. This version number is also output to dmesg during boot. This version number is checked in a few places in the code to determine if specific functionality is present.”…””}”(hj9hžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h K¬hj hžhubhÊ)”}”(hX2Furthermore, each synthetic device on VMBus also has a protocol version that is separate from the VMBus protocol version. Device drivers for these synthetic devices typically negotiate the device protocol version, and may test that protocol version to determine if specific device functionality is present.”h]”hX2Furthermore, each synthetic device on VMBus also has a protocol version that is separate from the VMBus protocol version. 
Device drivers for these synthetic devices typically negotiate the device protocol version, and may test that protocol version to determine if specific device functionality is present.”…””}”(hjGhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h K²hj hžhubeh}”(h]”Œ versioning”ah ]”h"]”Œ versioning”ah$]”h&]”uh1h´hh¶hžhhŸh³h Kžubhµ)”}”(hhh]”(hº)”}”(hŒCode Packaging”h]”hŒCode Packaging”…””}”(hj`hžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1h¹hj]hžhhŸh³h K¹ubhÊ)”}”(hŒOHyper-V related code appears in the Linux kernel code tree in three main areas:”h]”hŒOHyper-V related code appears in the Linux kernel code tree in three main areas:”…””}”(hjnhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h Kºhj]hžhubhŒenumerated_list”“”)”}”(hhh]”(j )”}”(hŒ drivers/hv ”h]”hÊ)”}”(hŒ drivers/hv”h]”hŒ drivers/hv”…””}”(hj…hžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h K½hjubah}”(h]”h ]”h"]”h$]”h&]”uh1j hj~hžhhŸh³h Nubj )”}”(hŒ&arch/x86/hyperv and arch/arm64/hyperv ”h]”hÊ)”}”(hŒ%arch/x86/hyperv and arch/arm64/hyperv”h]”hŒ%arch/x86/hyperv and arch/arm64/hyperv”…””}”(hjhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h K¿hj™ubah}”(h]”h ]”h"]”h$]”h&]”uh1j hj~hžhhŸh³h Nubj )”}”(hŒ\individual device driver areas such as drivers/scsi, drivers/net, drivers/clocksource, etc. ”h]”hÊ)”}”(hŒ[individual device driver areas such as drivers/scsi, drivers/net, drivers/clocksource, etc.”h]”hŒ[individual device driver areas such as drivers/scsi, drivers/net, drivers/clocksource, etc.”…””}”(hjµhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h KÁhj±ubah}”(h]”h ]”h"]”h$]”h&]”uh1j hj~hžhhŸh³h Nubeh}”(h]”h ]”h"]”h$]”h&]”Œenumtype”Œarabic”Œprefix”hŒsuffix”Œ.”uh1j|hj]hžhhŸh³h K½ubhÊ)”}”(hŒ°A few miscellaneous files appear elsewhere. See the full list under "Hyper-V/Azure CORE AND DRIVERS" and "DRM DRIVER FOR HYPERV SYNTHETIC VIDEO DEVICE" in the MAINTAINERS file.”h]”hŒ¸A few miscellaneous files appear elsewhere. 
See the full list under “Hyper-V/Azure CORE AND DRIVERS†and “DRM DRIVER FOR HYPERV SYNTHETIC VIDEO DEVICE†in the MAINTAINERS file.”…””}”(hjÔhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h KÄhj]hžhubhÊ)”}”(hŒœThe code in #1 and #2 is built only when CONFIG_HYPERV is set. Similarly, the code for most Hyper-V related drivers is built only when CONFIG_HYPERV is set.”h]”hŒœThe code in #1 and #2 is built only when CONFIG_HYPERV is set. Similarly, the code for most Hyper-V related drivers is built only when CONFIG_HYPERV is set.”…””}”(hjâhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h KÈhj]hžhubhÊ)”}”(hŒáMost Hyper-V related code in #1 and #3 can be built as a module. The architecture specific code in #2 must be built-in. Also, drivers/hv/hv_common.c is low-level code that is common across architectures and must be built-in.”h]”hŒáMost Hyper-V related code in #1 and #3 can be built as a module. The architecture specific code in #2 must be built-in. Also, drivers/hv/hv_common.c is low-level code that is common across architectures and must be built-in.”…””}”(hjðhžhhŸNh Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hÉhŸh³h KÌhj]hžhubeh}”(h]”Œcode-packaging”ah ]”h"]”Œcode packaging”ah$]”h&]”uh1h´hh¶hžhhŸh³h K¹ubeh}”(h]”Œoverview”ah ]”h"]”Œoverview”ah$]”h&]”uh1h´hhhžhhŸh³h Kubeh}”(h]”h ]”h"]”h$]”h&]”Œsource”h³uh1hŒcurrent_source”NŒ current_line”NŒsettings”Œdocutils.frontend”ŒValues”“”)”}”(h¹NŒ generator”NŒ datestamp”NŒ source_link”NŒ source_url”NŒ toc_backlinks”Œentry”Œfootnote_backlinks”KŒ sectnum_xform”KŒstrip_comments”NŒstrip_elements_with_classes”NŒ strip_classes”NŒ report_level”KŒ halt_level”KŒexit_status_level”KŒdebug”NŒwarning_stream”NŒ traceback”ˆŒinput_encoding”Œ utf-8-sig”Œinput_encoding_error_handler”Œstrict”Œoutput_encoding”Œutf-8”Œoutput_encoding_error_handler”j1Œerror_encoding”Œutf-8”Œerror_encoding_error_handler”Œbackslashreplace”Œ language_code”Œen”Œrecord_dependencies”NŒconfig”NŒ id_prefix”hŒauto_id_prefix”Œid”Œ 
dump_settings”NŒdump_internals”NŒdump_transforms”NŒdump_pseudo_xml”NŒexpose_internals”NŒstrict_visitor”NŒ_disable_config”NŒ_source”h³Œ _destination”NŒ _config_files”]”Œ7/var/lib/git/docbuild/linux/Documentation/docutils.conf”aŒfile_insertion_enabled”ˆŒ raw_enabled”KŒline_length_limit”M'Œpep_references”NŒ pep_base_url”Œhttps://peps.python.org/”Œpep_file_url_template”Œpep-%04d”Œrfc_references”NŒ rfc_base_url”Œ&https://datatracker.ietf.org/doc/html/”Œ tab_width”KŒtrim_footnote_reference_space”‰Œsyntax_highlight”Œlong”Œ smart_quotes”ˆŒsmartquotes_locales”]”Œcharacter_level_inline_markup”‰Œdoctitle_xform”‰Œ docinfo_xform”KŒsectsubtitle_xform”‰Œ image_loading”Œlink”Œembed_stylesheet”‰Œcloak_email_addresses”ˆŒsection_self_link”‰Œenv”NubŒreporter”NŒindirect_targets”]”Œsubstitution_defs”}”Œsubstitution_names”}”Œrefnames”}”Œ(hyper-v top level functional spec (tlfs)”]”jasŒrefids”}”Œnameids”}”(j jj¾j»j§j¤jxjuj­jªjâjßj jjZjWjjuŒ nametypes”}”(j ‰j¾‰j§ˆjx‰j­‰jâ‰j ‰jZ‰j‰uh}”(jh¶j»hçj¤jžjujÁjªj{jßj°jjåjWj jj]uŒ footnote_refs”}”Œ citation_refs”}”Œ autofootnotes”]”Œautofootnote_refs”]”Œsymbol_footnotes”]”Œsymbol_footnote_refs”]”Œ footnotes”]”Œ citations”]”Œautofootnote_start”KŒsymbol_footnote_start”KŒ id_counter”Œ collections”ŒCounter”“”}”…”R”Œparse_messages”]”Œtransform_messages”]”Œ transformer”NŒ include_log”]”Œ decoration”Nhžhub.