author    David P. Reed <dpreed@deepplum.com>    2020-12-30 16:26:56 -0800
committer Paolo Bonzini <pbonzini@redhat.com>    2021-02-04 05:27:32 -0500
commit    53666664a3052e4ea3ddcb183460dfbc30f1d056 (patch)
tree      f575ffb127cb6be9c88a9d0e635bc4e2ecb24de5
parent    ed72736183c45a413a8d6974dd04be90f514cb6b (diff)
download  powerpc-53666664a3052e4ea3ddcb183460dfbc30f1d056.tar.gz
x86/virt: Mark flags and memory as clobbered by VMXOFF
Explicitly tell the compiler that VMXOFF modifies flags (like all VMX
instructions), and mark memory as clobbered since VMXOFF must not be
reordered and also may have memory side effects (though the kernel
really shouldn't be accessing the root VMCS anyways).

Practically speaking, adding the clobbers is most likely a nop; the
primary motivation is to properly document VMXOFF's behavior.

For the flags clobber, both Clang and GCC automatically mark flags as
clobbered; this is noted in commit 4b1e54786e48 ("KVM/x86: Use assembly
instruction mnemonics instead of .byte streams"), which intentionally
removed the previous clobber. But, neither Clang nor GCC documents this
behavior, and there's no downside to including the clobber.

For the memory clobber, the RFLAGS.IF and CR4.VMXE manipulations that
immediately follow VMXOFF have compiler barriers of their own, i.e.
VMXOFF can't get reordered after clearing CR4.VMXE, which is really
what's of interest.

Cc: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: David P. Reed <dpreed@deepplum.com>
[sean: rewrote changelog, dropped comment adjustments]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20201231002702.2223707-4-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-rw-r--r--  arch/x86/include/asm/virtext.h  3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/virtext.h b/arch/x86/include/asm/virtext.h
index fda3e7747c2238..2cc58546766784 100644
--- a/arch/x86/include/asm/virtext.h
+++ b/arch/x86/include/asm/virtext.h
@@ -44,7 +44,8 @@ static inline int cpu_has_vmx(void)
static inline void cpu_vmxoff(void)
{
asm_volatile_goto("1: vmxoff\n\t"
- _ASM_EXTABLE(1b, %l[fault]) :::: fault);
+ _ASM_EXTABLE(1b, %l[fault])
+ ::: "cc", "memory" : fault);
fault:
cr4_clear_bits(X86_CR4_VMXE);
}
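
For readers unfamiliar with the clobber syntax above, here is a minimal
userspace sketch of the same "asm goto" pattern with "cc" and "memory"
clobbers. It is not kernel code: the STC instruction, the demo()
function, and the "taken" label are stand-ins invented for illustration
(STC modifies RFLAGS the way VMXOFF does, and "taken" plays the role of
the "fault" label reached via _ASM_EXTABLE). It should compile with GCC
or Clang on x86-64; plain "asm goto" is used because the kernel's
asm_volatile_goto() wrapper exists only to work around an old GCC bug.

#include <stdio.h>

/* Userspace stand-in for cpu_vmxoff(): STC really does clobber flags,
 * and the clobber list matches the one added by this patch. */
static int demo(void)
{
	asm goto("stc\n\t"           /* set CF: the asm modifies flags */
		 "jc %l[taken]"      /* branch to a C label, like the fault target */
		 : /* no outputs: asm goto historically allows none */
		 : /* no inputs */
		 : "cc", "memory"    /* flags and memory clobbered, as in the patch */
		 : taken);
	return 0;
taken:
	return 1;
}

int main(void)
{
	printf("branch %s taken\n", demo() ? "was" : "was not");
	return 0;
}

Without the "memory" clobber the compiler would be free to cache memory
values across the asm statement; with it, the statement acts as a
compiler barrier, which is the reordering guarantee the changelog
discusses for VMXOFF versus the CR4.VMXE clear.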