2020-07-10  x86: Allow to limit maximum RAM address  (HEAD, master; Nadav Amit, 3 files, +12/-0)
While there is a feature to limit RAM memory, we should also be able to limit the maximum RAM address. Specifically, SVM can only work when the maximum RAM address is lower than 4G, as it does not map the rest of the memory into the NPT. Allow doing so via the firmware; the expected use case, however, is to provide this information on bare metal using the MEMLIMIT parameter in initrd. Signed-off-by: Nadav Amit <namit@vmware.com> Message-Id: <20200710183320.27266-5-namit@vmware.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-10  x86: remove dead writes from setup_mmu()  (Nadav Amit, 1 file, +0/-3)
Recent changes cause end_of_memory to be disregarded in 32-bit. Remove the dead writes to it. Signed-off-by: Nadav Amit <namit@vmware.com> Message-Id: <20200710183320.27266-4-namit@vmware.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-10  x86: svm: present bit is set on nested page-faults  (Nadav Amit, 1 file, +2/-2)
On nested page-faults due to write-protect or reserved bits, the present-bit in EXITINFO1 is set, as confirmed on bare-metal. Set the expected result accordingly. This indicates that KVM has a bug. Signed-off-by: Nadav Amit <namit@vmware.com> Message-Id: <20200710183320.27266-3-namit@vmware.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-10  cstart: do not assume CR4 starts as zero  (Paolo Bonzini, 2 files, +2/-4)
The BIOS might leave some bits set in CR4; for example, CR4.DE=1 would cause the SVM test for the DR intercept to fail, because DR4/DR5 can only be written when CR4.DE is clear, and otherwise trigger a #GP exception. Reported-by: Nadav Amit <namit@vmware.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-08  kvm-unit-tests: nSVM: Test that MBZ bits in CR3 and CR4 are not set on vmrun of nested guests  (Krish Sadhukhan, 2 files, +131/-17)
According to section "Canonicalization and Consistency Checks" in APM vol. 2, the following guest state is illegal: "Any MBZ bit of CR3 is set." "Any MBZ bit of CR4 is set." Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Message-Id: <1594168797-29444-4-git-send-email-krish.sadhukhan@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-08  svm: fix clobbers for svm_vmrun  (Paolo Bonzini, 1 file, +1/-1)
r15 is used by ASM_VMRUN_CMD, so we need to mark it as clobbered. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-06  lib/vmalloc: allow vm_memalign with alignment > PAGE_SIZE  (Claudio Imbrenda, 2 files, +30/-8)
Allow allocating aligned virtual memory with an alignment larger than one page. Add a check that the backing pages were actually allocated. Export the alloc_vpages_aligned function to allow users to allocate non-backed aligned virtual addresses. Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Message-Id: <20200706164324.81123-5-imbrenda@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-06  lib/alloc_page: move get_order and is_power_of_2 to bitops.h  (Claudio Imbrenda, 5 files, +11/-11)
The functions get_order and is_power_of_2 are simple and should probably live in a header, like similar simple functions in bitops.h. Since they concern bit manipulation, the logical place for them is bitops.h. Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Message-Id: <20200706164324.81123-4-imbrenda@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
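As a sketch of what these two helpers look like (hypothetical reimplementations for illustration, not the exact code moved by the patch — here get_order returns the smallest order k such that 1ul << k covers n):

```c
#include <assert.h>
#include <stddef.h>

/* true iff exactly one bit is set (and n != 0) */
static inline int is_power_of_2(unsigned long n)
{
	return n && !(n & (n - 1));
}

/* smallest order k such that (1ul << k) >= n */
static inline unsigned int get_order(size_t n)
{
	unsigned int order = 0;

	while ((1ul << order) < n)
		order++;
	return order;
}
```

Both are pure bit manipulation with no allocator state, which is why bitops.h is the natural home.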
2020-07-06  lib/alloc_page: change some parameter types  (Claudio Imbrenda, 2 files, +7/-7)
For size parameters, size_t is probably semantically more appropriate than unsigned long (although they map to the same value). For order, unsigned long is just too big. Also, get_order returns an unsigned int anyway. Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Jim Mattson <jmattson@google.com> Message-Id: <20200706164324.81123-3-imbrenda@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-06  lib/vmalloc: fix pages count local variable to be size_t  (Claudio Imbrenda, 1 file, +2/-2)
Since size is of type size_t, size >> PAGE_SHIFT might still be too big for a normal unsigned int. Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Jim Mattson <jmattson@google.com> Message-Id: <20200706164324.81123-2-imbrenda@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
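A minimal illustration of the truncation (function names are hypothetical; assumes 4 KiB pages, i.e. PAGE_SHIFT of 12):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12	/* assumed 4 KiB pages */

/* buggy shape: storing the page count in unsigned int truncates it */
static unsigned int page_count_truncated(uint64_t size)
{
	return (unsigned int)(size >> PAGE_SHIFT);
}

/* fixed shape: keep the full width of the shifted value */
static uint64_t page_count(uint64_t size)
{
	return size >> PAGE_SHIFT;
}
```

For a 16 TiB size the page count is 2^32, which does not fit in a 32-bit unsigned int and silently becomes 0.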
2020-07-06  vmx: remove unnecessary #ifdef __x86_64__  (Paolo Bonzini, 1 file, +0/-12)
The VMX tests are 64-bit only, so checking the architecture is unnecessary. Also, if the tests supported 32-bits environments the #ifdef would probably go in test_canonical. Reported-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-06  kvm-unit-tests: nVMX: Test GUEST_LIMIT_GDTR and GUEST_LIMIT_IDTR on vmentry of nested guests  (Krish Sadhukhan, 1 file, +17/-0)
According to section "Checks on Guest Descriptor-Table Registers" in Intel SDM vol 3C, the following checks are performed on the Guest Descriptor-Table Registers on vmentry of nested guests: - Bits 31:16 of each limit field must be 0. Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Message-Id: <20200523002603.32450-4-krish.sadhukhan@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-06  kvm-unit-tests: nVMX: Test GUEST_BASE_GDTR and GUEST_BASE_IDTR on vmentry of nested guests  (Krish Sadhukhan, 1 file, +5/-0)
According to section "Checks on Guest Descriptor-Table Registers" in Intel SDM vol 3C, the following check is performed on the Guest Descriptor-Table Registers on vmentry of nested guests: - On processors that support Intel 64 architecture, the base-address fields must contain canonical addresses. Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Message-Id: <20200523002603.32450-2-krish.sadhukhan@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-06  kvm-unit-tests: x86: Remove duplicate instance of 'vmcb'  (Krish Sadhukhan, 1 file, +0/-1)
Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Message-Id: <20200522221954.32131-5-krish.sadhukhan@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-03  x86: access: Add test for illegal toggling of CR4.LA57 in 64-bit mode  (Sean Christopherson, 2 files, +13/-0)
Add a test to verify that KVM correctly injects a #GP if the guest attempts to toggle CR4.LA57 while 64-bit mode is active. Use two versions of the toggling, one to toggle only LA57 and a second to toggle PSE in addition to LA57. KVM doesn't intercept LA57, i.e. toggling only LA57 effectively tests the CPU, not KVM. Use PSE as the whipping boy as it will not trigger a #GP on its own, is universally available, is ignored in 64-bit mode, and most importantly is trapped by KVM. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200703021903.5683-1-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-02  x86: realmode: fix serial_init()  (Nadav Amit, 1 file, +9/-0)
In some setups, serial output from the real-mode tests is corrupted. I do not know the serial port initialization code well, but the protected-mode initialization code is different from the real-mode code, and using the protected-mode serial port initialization fixes the problem. Keeping the tradition of code duplication between real mode and protected mode, this patch copies the missing initialization into the real-mode serial port initialization. Signed-off-by: Nadav Amit <namit@vmware.com> Message-Id: <20200701193045.31247-1-namit@vmware.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-01  kvm-unit-tests: nSVM: Test that DR6[63:32], DR7[63:32] and EFER reserved bits are not set on vmrun of nested guests  (Krish Sadhukhan, 2 files, +53/-8)
According to section "Canonicalization and Consistency Checks" in APM vol. 2, the following guest state is illegal: "DR6[63:32] are not zero." "DR7[63:32] are not zero." "Any MBZ bit of EFER is set." Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Message-Id: <20200522221954.32131-4-krish.sadhukhan@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-01  x86: nVMX: Print more (accurate) info if RDTSC diff test fails  (Sean Christopherson, 1 file, +6/-5)
Snapshot the delta of the last run and display it in the report if the test fails. Abort the run loop as soon as the threshold is reached so that the displayed delta is guaranteed to be a failing delta. Displaying the delta helps triage failures, e.g. is my system completely broken or did I get unlucky, and aborting the loop early saves 99900 runs when the system is indeed broken. Cc: Nadav Amit <nadav.amit@gmail.com> Cc: Aaron Lewis <aaronlewis@google.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200124234608.10754-1-sean.j.christopherson@intel.com> Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Reviewed-by: Aaron Lewis <aaronlewis@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-01  x86: svm: avoid advancing rip incorrectly on exc_inject  (Nadav Amit, 1 file, +2/-2)
exc_inject advances the RIP on every stage, so it can do so three times, but the guest runs only two vmmcall instructions. So, if a failure happens on the last test, there is no vmmcall instruction to trigger an exit. Advance the RIP only in the two stages in which vmmcall is expected to run. Signed-off-by: Nadav Amit <namit@vmware.com> Message-Id: <20200630094516.22983-6-namit@vmware.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-01  x86: svm: use PML4E in npt_rsvd_pfwalk_prepare  (Nadav Amit, 3 files, +8/-2)
According to the AMD manual, bit 8 of the PDPE is not reserved, but it is in the PML4E. Reported-by: Nadav Amit <namit@vmware.com> Message-Id: <20200630094516.22983-5-namit@vmware.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-01  x86: svm: flush TLB on each test  (Nadav Amit, 1 file, +1/-0)
Several svm tests change PTEs but do not flush the TLB. To avoid messing around or encountering new bugs in the future, flush the TLB on every test. Signed-off-by: Nadav Amit <namit@vmware.com> Message-Id: <20200630094516.22983-4-namit@vmware.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-01  x86: svm: check TSC adjust support  (Nadav Amit, 1 file, +6/-1)
MSR_IA32_TSC_ADJUST may be supported by KVM on AMD machines, but it does not appear in the AMD manual. Check CPUID to see whether it is supported before running the relevant tests. Signed-off-by: Nadav Amit <namit@vmware.com> Message-Id: <20200630094516.22983-3-namit@vmware.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-01  x86: Remove boot_idt assembly assignment  (Nadav Amit, 1 file, +0/-3)
boot_idt is now a symbol. Signed-off-by: Nadav Amit <namit@vmware.com> Message-Id: <20200630094516.22983-2-namit@vmware.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-01  gitlab-ci.yml: Extend the lists of tests that we run with TCG  (Thomas Huth, 1 file, +15/-9)
Thanks to the recent fixes, there are now quite a lot of additional 32-bit x86 tests that we can run in the CI. And thanks to the update to Fedora 32 (which introduced a newer version of QEMU), there are now also some additional tests that we can run with TCG for the other architectures. Note that for arm/aarch64, we now also set MAX_SMP to be able to run SMP tests with TCG in the single-threaded CI containers, too. Signed-off-by: Thomas Huth <thuth@redhat.com> Message-Id: <20200701100615.7975-1-thuth@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-07-01  scripts: Fix the check whether testname is in the only_tests list  (Paolo Bonzini, 1 file, +8/-3)
When you currently run ./run_tests.sh ioapic-split the kvm-unit-tests run scripts do not only execute the "ioapic-split" test, but also the "ioapic" test, which is quite surprising. This happens because we use "grep -w" for checking whether a test should be run or not. Because "grep -w" does not consider the "-" character as part of a word, "ioapic" successfully matches against "ioapic-split". To fix the issue, use spaces as the only delimiter when running "grep", removing the problematic "-w" flag from the invocation. While at it, add "-F" to avoid unintended use of regular expression metacharacters. Reported-by: Thomas Huth <thuth@redhat.com> Tested-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Thomas Huth <thuth@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-29  x86: pmu: fix failures on 32-bit due to wrong masks  (Nadav Amit, 1 file, +4/-4)
Some mask computations use long constants instead of long long constants, which causes test failures on x86-32. Signed-off-by: Nadav Amit <namit@vmware.com> Message-Id: <20200619193909.18949-1-namit@vmware.com> Reviewed-by: Jim Mattson <jmattson@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-26  x86: realmode: fix lss test  (Nadav Amit, 1 file, +2/-2)
Running lss with some random descriptor and then performing pop does not work so well. Use mov instructions instead of push/pop pair. Signed-off-by: Nadav Amit <namit@vmware.com> Message-Id: <20200626092333.2830-4-namit@vmware.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-26  x86: realmode: hlt loop as fallback on exit  (Nadav Amit, 1 file, +4/-0)
For systems without emulated devices (e.g., bare metal), use a hlt loop when exiting the realmode test. Signed-off-by: Nadav Amit <namit@vmware.com> Message-Id: <20200626092333.2830-3-namit@vmware.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-26  x86: realmode: initialize idtr  (Nadav Amit, 1 file, +2/-0)
The realmode test does not initialize the IDTR, assuming that its base is zero and its limit 0x3ff. Initialize it, as the bootloader might not set it as such. Signed-off-by: Nadav Amit <namit@vmware.com> Message-Id: <20200626092333.2830-2-namit@vmware.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-26  x86: map bottom 2G 1:1 into page tables  (Paolo Bonzini, 1 file, +1/-2)
Right now only addresses up to the highest RAM memory address are mapped 1:1 into the 32-bit page tables, but this also excludes ACPI-reserved areas that are higher than the highest RAM memory address. Depending on the memory layout, this may prevent the tests from accessing the ACPI tables after setup_vm. Unconditionally including the bottom 2G of memory fixes that. We do rely on the ACPI tables being in the first 2GB of memory, which is not necessarily true on bare metal; fixing that requires adding calls to something like Linux's kmap/kunmap. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-25  x86: setup segment registers before percpu areas  (Paolo Bonzini, 1 file, +4/-4)
The base of the percpu area is stored in the %gs base, and writing to %gs destroys it. Move setup_segments earlier, before the %gs base is written, and keep setup_percpu_area close so that the base is updated close to the selector. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-25  x86: fix smp_stacktop on 32-bit  (Nadav Amit, 1 file, +1/-1)
smp_stacktop in 32-bit is fixed to some magic address. Use the address of the memory that was reserved for the stack instead. Signed-off-by: Nadav Amit <namit@vmware.com> Message-Id: <20200624203602.44659-1-namit@vmware.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-25  x86: fix stack pointer after call  (Paolo Bonzini, 1 file, +1/-0)
Since setup_multiboot has a C calling convention, the stack pointer must be adjusted after the call. Without this change, the bottom of the percpu area would be 4 bytes below the bottom of the stack and overlap the top 4 bytes of CPU 1's stack. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-25  x86: move IDT away from address 0  (Paolo Bonzini, 2 files, +17/-4)
Address 0 is also used for the SIPI vector (which is probably something worth changing as well), and now that we call setup_idt very early the SIPI vector overwrites the first few bytes of the IDT, and in particular the #DE handler. Fix this for both 32-bit and 64-bit, even though the different form of the descriptors meant that only 32-bit showed a failure. Reported-by: Thomas Huth <thuth@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-24  Revert "SVM: move guest past HLT"  (Vitaly Kuznetsov, 1 file, +0/-8)
The 'nmi_hlt' test returns a somewhat weird result:

PASS: direct NMI + hlt
PASS: NMI intercept while running guest
PASS: intercepted NMI + hlt
PASS: nmi_hlt
SUMMARY: 4 tests, 1 unexpected failures

Trying to investigate where the failure was coming from, I was tweaking the code, and with tiny, meaningless changes I was able to observe #PF, #GP, #UD and other 'interesting' results. Compiler optimization flags also change the outcome, so there is obviously a corruption somewhere. Adding a meaningless 'nop' after the second 'asm volatile ("hlt");' in nmi_hlt_test() saves the day, so it seems we erroneously advance RIP twice; the advancement in nmi_hlt_finished() is not needed. The outcome, however, contradicts the commit message of 7e7aa86f74 ("SVM: move guest past HLT"). With that commit reverted, all tests seem to pass, but I'm not sure what issue the commit was trying to fix, thus RFC. This reverts commit 7e7aa86f7418a8343de46583977f631e55fd02ed. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20200623082711.803916-1-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-24  x86: Initialize segment selectors  (Nadav Amit, 1 file, +11/-6)
Currently, the BSP's segment selectors are not initialized in 32-bit (cstart.S). As a result the tests implicitly rely on the segment selector values that are set by the BIOS. If this assumption is not kept, the task-switch test fails. Fix it by initializing them. Signed-off-by: Nadav Amit <namit@vmware.com> Message-Id: <20200623084132.36213-1-namit@vmware.com> Reviewed-by: Jim Mattson <jmattson@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-24  x86: skip hyperv_clock test when host clocksource is not TSC  (Vitaly Kuznetsov, 1 file, +1/-0)
Hyper-V TSC page clocksource is TSC based so it requires host to use TSC for clocksource. While TSC is more or less standard for x86 hardware nowadays, when kvm-unit-tests are run in a VM the clocksource tends to be different (e.g. kvm-clock). Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20200617152139.402827-1-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-23  Merge tag 'pull-request-2020-06-16' of https://gitlab.com/huth/kvm-unit-tests  (Paolo Bonzini, 8 files, +45/-15)
* Lots of CI-related fixes and improvements * Update the gitlab-CI to Fedora 32 * Test compilation with Clang
2020-06-23  lib/alloc.c: fix missing include  (Paolo Bonzini, 1 file, +1/-0)
Include bitops.h to get BITS_PER_LONG and avoid errors such as:

lib/alloc.c: In function 'mult_overflow':
lib/alloc.c:24:9: error: right shift count >= width of type [-Werror=shift-count-overflow]
   24 |         if ((a >> 32) && (b >> 32))

Fixes: cde8415e1 ("lib/alloc.c: add overflow check for calloc") Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-22  Fixes for the umip test  (Thomas Huth, 1 file, +4/-2)
When compiling umip.c with -O2 instead of -O1, there are currently two problems. First, the compiler complains:

x86/umip.c: In function 'do_ring3':
x86/umip.c:162:37: error: array subscript 4096 is above array bounds of 'unsigned char[4096]' [-Werror=array-bounds]
    [user_stack_top]"m"(user_stack[sizeof user_stack]),

This can be fixed by initializing the stack pointer to one of the last bytes of the array instead. The second problem is that some tests fail because the GP_ASM macro uses inline asm without the "volatile" keyword, so the compiler reorders this code in certain cases where it should not. Fix it by adding "volatile" here. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com> Message-Id: <20200512094438.17998-1-thuth@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
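The array-bounds part of the fix can be illustrated with a standalone sketch (user_stack and stack_top here are illustrative stand-ins, not the test's exact code):

```c
#include <assert.h>

static unsigned char user_stack[4096];

/* &user_stack[sizeof user_stack] is the one-past-the-end address; GCC at -O2
 * rejects it as an "m" asm operand with -Werror=array-bounds. Pointing at
 * the last byte instead keeps the operand inside the array. */
static unsigned char *stack_top(void)
{
	return &user_stack[sizeof user_stack - 1];
}
```

The ring-3 entry code then only has to account for the stack top being one byte lower than before.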
2020-06-22  Fix out-of-tree builds  (Andrew Jones, 1 file, +3/-5)
Since b16df9ee5f3b out-of-tree builds have been broken because we started validating the newly user-configurable $erratatxt file before linking it into the build dir. We fix this not by moving the validation, but by removing the linking and instead using the full path of the $erratatxt file. This allows one to keep that file separate from the src and build dirs. Fixes: b16df9ee5f3b ("arch-run: Add reserved variables to the default environ") Reported-by: Thomas Huth <thuth@redhat.com> Signed-off-by: Andrew Jones <drjones@redhat.com> Message-Id: <20200511070641.23492-1-drjones@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-22  lib/vmalloc: add locking and a check for initialization  (Claudio Imbrenda, 1 file, +11/-5)
Make sure init_alloc_vpage is never called when vmalloc is in use. Get both init_alloc_vpage and setup_vm to use the lock. For setup_vm we only check at the end because at least on some architectures setup_mmu can call init_alloc_vpage, which would cause a deadlock. Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Message-Id: <20200622162141.279716-9-imbrenda@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-22  lib/alloc_page: make get_order return unsigned int  (Claudio Imbrenda, 2 files, +2/-2)
Since get_order never returns a negative value, it makes sense to make it return an unsigned int. The returned value will be in practice always very small, a u8 would probably also do the trick. Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Message-Id: <20200622162141.279716-8-imbrenda@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-22  lib/vmalloc: fix potential race and non-standard pointer arithmetic  (Claudio Imbrenda, 1 file, +8/-2)
The pointer vfree_top should only be accessed with the lock held, so make sure we return a local copy of the pointer taken safely inside the lock. Also avoid doing pointer arithmetic on void pointers: GCC allows it, but it is ugly. Use uintptr_t for doing maths on the pointer. This will also come in useful in upcoming patches. Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Message-Id: <20200622162141.279716-7-imbrenda@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
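A sketch of the uintptr_t approach (helper names are hypothetical; the cast round-trip is the pattern the patch adopts instead of GNU-extension void * arithmetic):

```c
#include <assert.h>
#include <stdint.h>

/* align a pointer down to a power-of-two boundary via integer math */
static void *ptr_align_down(void *p, uintptr_t align)
{
	return (void *)((uintptr_t)p & ~(align - 1));
}

/* move a pointer down by a byte count without void * arithmetic */
static void *ptr_sub(void *p, uintptr_t bytes)
{
	return (void *)((uintptr_t)p - bytes);
}
```

Unlike `p - bytes` on a void *, this is valid standard C and compiles cleanly with strict flags.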
2020-06-22  lib: Fix a typo and add documentation comments  (Claudio Imbrenda, 2 files, +9/-1)
Fix a typo in lib/alloc_phys.h and add documentation comments to all functions in lib/vmalloc.h Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Message-Id: <20200622162141.279716-6-imbrenda@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-22  lib/alloc.c: add overflow check for calloc  (Claudio Imbrenda, 1 file, +35/-1)
Add an overflow check for calloc to prevent potential multiplication overflow. Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Message-Id: <20200622162141.279716-5-imbrenda@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
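One common shape for such a check (a sketch of the general technique, not necessarily the exact check the patch adds):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* would n * size overflow size_t? divide instead of multiplying,
 * so the check itself cannot overflow */
static int mult_overflows(size_t n, size_t size)
{
	return size != 0 && n > SIZE_MAX / size;
}
```

A calloc implementation would call this before multiplying and return NULL on overflow rather than allocating a too-small buffer.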
2020-06-22  lib: use PAGE_ALIGN  (Claudio Imbrenda, 1 file, +4/-4)
Since now PAGE_ALIGN is available in all architectures, start using it in common code to improve readability. Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Message-Id: <20200622162141.279716-4-imbrenda@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-22  x86: add missing PAGE_ALIGN macro from page.h  (Claudio Imbrenda, 1 file, +2/-0)
The PAGE_ALIGN macro is present in all other page.h headers, including the generic one. This patch adds the missing PAGE_ALIGN macro to lib/x86/asm/page.h. Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Message-Id: <20200622162141.279716-3-imbrenda@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
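The macro typically looks like this (a sketch of the generic round-up-to-page pattern, assuming 4 KiB pages, rather than the header's exact contents):

```c
#include <assert.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1ul << PAGE_SHIFT)

/* round addr up to the next page boundary (identity on aligned values) */
#define PAGE_ALIGN(addr) (((addr) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))
```

The add-then-mask form relies on PAGE_SIZE being a power of two, which makes the mask `~(PAGE_SIZE - 1)` clear exactly the offset bits.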
2020-06-22  x86/cstart.S: initialize stack before using it  (Claudio Imbrenda, 1 file, +1/-1)
It seems the 32-bit initialization code uses the stack before actually initializing it. Probably the boot loader leaves a reasonable value in the stack pointer so this issue has not been noticed before. Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Message-Id: <20200622162141.279716-2-imbrenda@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-22  nVMX: Extend EPT cap MSR test to allow 5-level EPT  (Sean Christopherson, 1 file, +1/-0)
Modify the MSR_IA32_VMX_EPT_VPID_CAP test to mark the 5-level EPT cap bit as allowed-1. KVM is in the process of gaining support for 5-level nested EPT[*]. [*] https://lkml.kernel.org/r/20200206220836.22743-1-sean.j.christopherson@intel.com Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200207174244.6590-5-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-22  nVMX: Mark bit 39 of MSR_IA32_VMX_EPT_VPID_CAP as reserved  (Sean Christopherson, 1 file, +0/-1)
Remove bit 39, which is defined as reserved in Intel's SDM, from the set of allowed-1 bits in MSR_IA32_VMX_EPT_VPID_CAP. Fixes: 69c8d31 ("VMX: Validate capability MSRs") Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200207174244.6590-4-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-22  nVMX: Refactor the EPT/VPID MSR cap check to make it readable  (Sean Christopherson, 2 files, +22/-2)
Use the EPT_CAP_* and VPID_CAP_* defines to declare which bits are reserved in MSR_IA32_VMX_EPT_VPID_CAP. Encoding the reserved bits in a 64-bit literal is difficult to read, even more difficult to update, and error prone, as evidenced by the check allowing bit 39 to be '1', despite it being reserved to zero in Intel's SDM. No functional change intended. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200207174244.6590-3-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-22  nVMX: Extend EPTP test to allow 5-level EPT  (Sean Christopherson, 2 files, +9/-4)
Modify the EPTP test to expect success when the EPTP is configured for 5-level page walks and 5-level walks are enumerated as supported by the EPT capabilities MSR. KVM is in the process of gaining support for 5-level nested EPT[*]. [*] https://lkml.kernel.org/r/20200206220836.22743-1-sean.j.christopherson@intel.com Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200207174244.6590-2-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-22  SVM: add test for nested guest RIP corruption  (Maxim Levitsky, 1 file, +102/-0)
This adds a unit test for SVM nested register corruption that happened when L0 emulated an instruction just before injecting a vmexit: upon vmexit, the VMCB contained the pre-emulation values of RAX, RIP and RSP. The test detects RIP corruption where RIP points at the start of the emulated instruction even though the instruction was already executed. The upstream commit that fixed this bug is b6162e82aef19fee9c32cb3fe9ac30d9116a8c73 ("KVM: nSVM: Preserve registers modifications done before nested_svm_vmexit()"). Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com> Message-Id: <20200622165533.145882-1-mlevitsk@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-22  x86: fix build with GCC10  (Vitaly Kuznetsov, 2 files, +2/-2)
kvm-unit-tests fail to build with GCC10:

/usr/bin/ld: lib/libcflat.a(usermode.o): ./kvm-unit-tests/lib/x86/usermode.c:17: multiple definition of `jmpbuf'; lib/libcflat.a(fault_test.o): ./kvm-unit-tests/lib/x86/fault_test.c:3: first defined here

It seems that 'jmpbuf' doesn't need to be global in either of these files; make it static in both. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20200617152124.402765-1-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
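The fix can be illustrated in a standalone file (run_with_recovery and the fn helpers are hypothetical stand-ins for the fault-handling code that uses jmpbuf):

```c
#include <assert.h>
#include <setjmp.h>

/* 'static' keeps the symbol local to this translation unit, so another file
 * defining its own jmpbuf no longer causes a multiple-definition link error
 * under GCC 10's default -fno-common */
static jmp_buf jmpbuf;

static void faulting_fn(void)
{
	longjmp(jmpbuf, 1);	/* simulate a fault handler unwinding */
}

static void ok_fn(void)
{
}

/* run fn; report whether it completed or "faulted" back via longjmp */
static int run_with_recovery(void (*fn)(void))
{
	if (setjmp(jmpbuf))
		return 1;	/* got here via longjmp */
	fn();
	return 0;		/* fn returned normally */
}
```

With the symbol static, each of usermode.c and fault_test.c can keep its own private jmpbuf.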
2020-06-16  s390x: stsi: Make output tap13 compatible  (Janosch Frank, 1 file, +3/-3)
In tap13 output # is a special character and only "skip" and "todo" are allowed to come after it. Let's appease our CI environment and replace # with "count". Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Message-Id: <20200525084340.1454-1-frankja@linux.ibm.com> Signed-off-by: Thomas Huth <thuth@redhat.com>
2020-06-16  x86: disable SSE on 32-bit hosts  (Paolo Bonzini, 1 file, +1/-0)
On 64-bit hosts we are disabling SSE and SSE2. Depending on the compiler however it may use movq instructions for 64-bit transfers even when targeting 32-bit processors; when CR4.OSFXSR is not set, this results in an undefined opcode exception, so tell the compiler to avoid those instructions on 32-bit hosts as well. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20200616140217.104362-1-pbonzini@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com>
2020-06-16  x86: disable SSE on 32-bit hosts  (Paolo Bonzini, 1 file, +1/-0)
On 64-bit hosts we are disabling SSE and SSE2. Depending on the compiler however it may use movq instructions for 64-bit transfers even when targeting 32-bit processors; when CR4.OSFXSR is not set, this results in an undefined opcode exception, so tell the compiler to avoid those instructions on 32-bit hosts as well. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-16  Compile the kvm-unit-tests also with Clang  (Thomas Huth, 1 file, +13/-0)
To get some more test coverage, let's check compilation with Clang, too. Message-Id: <20200514192626.9950-12-thuth@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com>
2020-06-16  vmx_tests: Silence warning from Clang  (Thomas Huth, 1 file, +1/-1)
Clang complains:

x86/vmx_tests.c:8429:40: error: converting the result of '<<' to a boolean always evaluates to true [-Werror,-Wtautological-constant-compare]
    vmx_preemption_timer_zero_inject_db(1 << DB_VECTOR);

Looking at the code, the "1 << DB_VECTOR" is indeed done within the function vmx_preemption_timer_zero_inject_db(): vmcs_write(EXC_BITMAP, intercept_db ? 1 << DB_VECTOR : 0); ... so using "true" as the parameter for the function is appropriate here. Message-Id: <20200514192626.9950-11-thuth@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com>
2020-06-16  x86: use inline asm to retrieve stack pointer  (Bill Wendling, 1 file, +6/-2)
According to GCC's documentation, the only supported use for specifying registers for local variables is "to specify registers for input and output operands when calling Extended asm." Using it as a shortcut to get the value in a register isn't guaranteed to work, and clang complains that the variable is uninitialized. Signed-off-by: Bill Wendling <morbo@google.com> Message-Id: <20191030210419.213407-7-morbo@google.com> Signed-off-by: Thomas Huth <thuth@redhat.com>
2020-06-16  x86: use a non-negative number in shift  (Bill Wendling, 1 file, +1/-1)
Shifting a negative number is undefined. Clang complains about it: x86/svm.c:1131:38: error: shifting a negative signed value is undefined [-Werror,-Wshift-negative-value] test->vmcb->control.tsc_offset = TSC_OFFSET_VALUE; Using "~0ull" results in identical asm code: before: movabsq $-281474976710656, %rsi after: movabsq $-281474976710656, %rsi Signed-off-by: Bill Wendling <morbo@google.com> [thuth: Rebased to master - code is in svm_tests.c instead of svm.c now] Message-Id: <20200514192626.9950-9-thuth@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com>
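The difference can be shown directly; the shift count of 48 matches the movabsq constant above (-281474976710656 is -(2^48)):

```c
#include <assert.h>
#include <stdint.h>

/* (-1ll << 48) shifts a negative signed value: undefined behavior in C.
 * (~0ull << 48) is a fully defined unsigned shift with the same bit pattern. */
#define TSC_OFFSET_VALUE (~0ull << 48)
```

Reinterpreted as a signed 64-bit value, the result is exactly the constant the compiler emitted before and after the change.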
2020-06-16  Update the gitlab-ci to Fedora 32  (Thomas Huth, 1 file, +2/-2)
Fedora 30 is end of life, let's use the version 32 instead. Unfortunately, we have to disable taskswitch2 in the gitlab-ci now. It does not seem to work anymore with the latest version of gcc and/or QEMU. We still check it in the travis-ci, though, so until somebody has some spare time to debug this issue, it should be ok to disable it here. Message-Id: <20200514192626.9950-8-thuth@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com>
2020-06-16  Fix powerpc issue with the linker from Fedora 32  (Thomas Huth, 1 file, +16/-3)
The linker from Fedora 32 complains: powerpc64-linux-gnu-ld: powerpc/selftest.elf: error: PHDR segment not covered by LOAD segment Let's introduce some fake PHDRs to the linker script to get this working again. Message-Id: <20200514192626.9950-7-thuth@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com>
2020-06-16  Always compile the kvm-unit-tests with -fno-common  (Thomas Huth, 3 files, +3/-3)
The new GCC v10 uses -fno-common by default. To avoid that we commit code that declares global variables twice and thus fails to link with the latest version, we should also compile with -fno-common when using older versions of the compiler. However, this now also means that we can not play the trick with the common auxinfo struct anymore. Thus declare it as extern in the header now and link auxinfo.c on x86, too. Message-Id: <20200514192626.9950-6-thuth@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com>
2020-06-16  Fixes for the umip test  (Thomas Huth) [1 file, -2/+4]
When compiling umip.c with -O2 instead of -O1, there are currently two problems. First, the compiler complains:

  x86/umip.c: In function ‘do_ring3’:
  x86/umip.c:162:37: error: array subscript 4096 is above array bounds of ‘unsigned char[4096]’ [-Werror=array-bounds]
    [user_stack_top]"m"(user_stack[sizeof user_stack]),
              ~~~~~~~~~~^~~~~~~~~~~~~~~~~~~

This can be fixed by initializing the stack pointer to point to one of the last bytes of the array instead. The second problem is that some tests fail because the GP_ASM macro uses inline asm without the "volatile" keyword, so the compiler reorders this code in certain cases where it should not. Fix it by adding "volatile" here. Message-Id: <20200122160944.29750-1-thuth@redhat.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com>
2020-06-16  x86: avoid multiple defined symbol  (Paolo Bonzini) [2 files, -2/+2]
Fedora 32 croaks about a symbol that is defined twice, fix it. Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20200511165959.42442-1-pbonzini@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com>
2020-06-16  Fix out-of-tree builds  (Andrew Jones) [1 file, -5/+3]
Since b16df9ee5f3b out-of-tree builds have been broken because we started validating the newly user-configurable $erratatxt file before linking it into the build dir. We fix this not by moving the validation, but by removing the linking and instead using the full path of the $erratatxt file. This allows one to keep that file separate from the src and build dirs. Fixes: b16df9ee5f3b ("arch-run: Add reserved variables to the default environ") Reported-by: Thomas Huth <thuth@redhat.com> Signed-off-by: Andrew Jones <drjones@redhat.com> Message-Id: <20200511070641.23492-1-drjones@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com>
2020-06-16  x86/pmu: Fix compilation on 32-bit hosts  (Thomas Huth) [1 file, -1/+1]
When building for 32-bit hosts, the compiler currently complains:

  x86/pmu.c: In function 'check_gp_counters_write_width':
  x86/pmu.c:490:30: error: left shift count >= width of type

Use the correct suffix to avoid this problem. Signed-off-by: Thomas Huth <thuth@redhat.com> Message-Id: <20200616105940.2907-1-thuth@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-10  x86: always set up SMP  (Paolo Bonzini) [32 files, -41/+6]
Currently setup_vm cannot assume that it can invoke IPIs, and therefore only initializes CR0/CR3/CR4 on the CPU it runs on. In order to keep the initialization code clean, let's just call smp_init (and therefore setup_idt) unconditionally. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-10  remove unused file  (Paolo Bonzini) [1 file, -14/+0]
Reviewed-by: Thomas Huth <thuth@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-01  x86: realmode: Add suffixes for push, pop and iret  (Roman Bolshakov) [1 file, -16/+16]
binutils 2.33 and 2.34 changed generation of PUSH and POP for segment registers and IRET in '.code16gcc' [1][2][3][4]. gas also yields the following warnings during the build of realmode.c:

  snip.s: Assembler messages:
  snip.s:2279: Warning: generating 32-bit `push', unlike earlier gas versions
  snip.s:2296: Warning: generating 32-bit `pop', unlike earlier gas versions
  snip.s:3633: Warning: generating 16-bit `iret' for .code16gcc directive

This change fixes warnings and failures of the tests: push/pop 3, push/pop 4, iret 1, iret 3.

1. https://sourceware.org/bugzilla/show_bug.cgi?id=24485
2. https://sourceware.org/git/?p=binutils-gdb.git;h=7cb22ff84745
3. https://sourceware.org/git/?p=binutils-gdb.git;h=06f74c5cb868
4. https://sourceware.org/git/?p=binutils-gdb.git;h=13e600d0f560

Cc: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com> Message-Id: <20200529212637.5034-1-r.bolshakov@yadro.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-05-29  x86: pmu: Test full-width counter writes support  (Like Xu) [2 files, -24/+102]
When the full-width writes capability is set, use the alternative MSR range to write larger signed counter values (up to the GP counter width). Signed-off-by: Like Xu <like.xu@linux.intel.com> Message-Id: <20200529074347.124619-4-like.xu@linux.intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-05-29  access: disable phys-bits=36 for now  (Paolo Bonzini) [1 file, -1/+1]
Support for guest-MAXPHYADDR < host-MAXPHYADDR is not upstream yet, it should not be enabled. Otherwise, all the pde.36 and pte.36 fail and the test takes so long that it times out. Reported-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-05-18  README: Document steps to run the tests on macOS  (Roman Bolshakov) [2 files, -2/+51]
While at it, mention that hvf is a valid accel parameter. Cc: Cameron Esfahani <dirty@apple.com> Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com> Reviewed-by: Cameron Esfahani <dirty@apple.com> Message-Id: <20200320145541.38578-3-r.bolshakov@yadro.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-05-18  scripts/arch-run: Support testing of hvf accel  (Roman Bolshakov) [1 file, -0/+13]
The tests can be run if Hypervisor.framework API is available: https://developer.apple.com/documentation/hypervisor?language=objc#1676667 Cc: Cameron Esfahani <dirty@apple.com> Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com> Reviewed-by: Cameron Esfahani <dirty@apple.com> Message-Id: <20200320145541.38578-2-r.bolshakov@yadro.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-05-18  x86: realmode: Test interrupt delivery after STI  (Roman Bolshakov) [1 file, -0/+21]
If interrupts are disabled, STI inhibits interrupts for the instruction following it. If STI is followed by HLT, the CPU is going to handle all pending or new interrupts as soon as HLT is executed. Test whether the emulator properly clears the inhibition state and allows the scenario outlined above. Cc: Cameron Esfahani <dirty@apple.com> Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com> Message-Id: <20200329071125.79253-1-r.bolshakov@yadro.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-05-09  svm_tests: add RSM intercept test  (Paolo Bonzini) [1 file, -0/+49]
This test is currently broken, but it passes under QEMU. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-05-09  x86: VMX: Add a VMX-preemption timer expiration test  (Jim Mattson) [3 files, -0/+125]
When the VMX-preemption timer is activated, code executing in VMX non-root operation should never be able to record a TSC value beyond the deadline imposed by adding the scaled VMX-preemption timer value to the first TSC value observed by the guest after VM-entry. Signed-off-by: Jim Mattson <jmattson@google.com> Reviewed-by: Peter Shier <pshier@google.com> Message-Id: <20200508203938.88508-1-jmattson@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-05-09  svm: Test V_IRQ injection  (Cathy Avery) [1 file, -0/+150]
Test V_IRQ injection from L1 to L2 with V_TPR less than or greater than V_INTR_PRIO. Also test VINTR intercept with differing V_TPR and V_INTR_PRIO. Signed-off-by: Cathy Avery <cavery@redhat.com> Message-Id: <20200509111622.2184-1-cavery@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-05-06  KVM: VMX: add test for NMI delivery during HLT  (Cathy Avery) [1 file, -0/+120]
Signed-off-by: Cathy Avery <cavery@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-05-05  VMX: use xAPIC mode on all processors  (Paolo Bonzini) [3 files, -1/+18]
Results are undefined if xAPIC/x2APIC mode is not homogeneous on all processors. So far things seemed to have mostly worked, but if you end up calling xapic_icr_write from an x2APIC-mode processor the write is eaten and the IPI is not delivered. Reported-by: Cathy Avery <cavery@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-05-04  Merge tag 's390x-2020-04-30' of https://github.com/davidhildenbrand/kvm-unit-tests  (Paolo Bonzini) [10 files, -26/+196]
New maintainer, reviewer, and cc list. New STSI test. Lots of minor fixes and cleanups.
2020-05-04  svm: Fix nmi hlt test to fail test correctly  (Cathy Avery) [1 file, -0/+1]
The last test does not return via vmmcall on failure, resulting in the entire test passing. Signed-off-by: Cathy Avery <cavery@redhat.com> Message-Id: <20200428184100.5426-1-cavery@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-05-04  x86: msr: Don't test bits 63:32 of SYSENTER MSRs on 32-bit builds  (Sean Christopherson) [1 file, -2/+3]
Squish the "address" stuffed into SYSENTER_EIP/ESP into an unsigned long so as to drop bits 63:32 on 32-bit builds. VMX diverges from bare metal in the sense that the associated VMCS fields are natural width fields, whereas the actual MSRs hold 64-bit values, even on CPUs that don't support 64-bit mode. This causes the tests to fail if bits 63:32 are non-zero and a VM-Exit/VM-Enter occurs on and/or between WRMSR/RDMSR, e.g. when running the tests in L1 or deeper. Don't bother trying to actually test that bits 63:32 are dropped: the behavior depends on the (virtual) CPU capabilities, not the build; the behavior is specific to VMX, as both SVM and bare metal preserve the full 64-bit values; and practically no one cares about 32-bit KVM, let alone an obscure architectural quirk that doesn't affect real-world kernels. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200428231135.12903-1-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-05-04  nVMX: Check EXIT_QUALIFICATION on VM-Enter failures due to bad guest state  (Sean Christopherson) [2 files, -1/+9]
Assert that vmcs.EXIT_QUALIFICATION contains the correct failure code on failed VM-Enter due to invalid guest state. Hardcode the expected code to the default code, '0', rather than passing in the expected code to minimize churn and boilerplate code, which works for all existing tests. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200424174025.1379-1-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-05-04  x86: ioapic: Run physical destination mode test iff cpu_count() > 1  (Sean Christopherson) [1 file, -1/+2]
Make test_ioapic_physical_destination_mode() depend on having at least two CPUs, as it sets ->dest_id to '1', i.e. expects CPU0 and CPU1 to exist. This analysis is backed up by the fact that the test was originally gated by cpu_count() > 1. Fixes: dcf27dc5b5499 ("x86: Fix the logical destination mode test") Cc: Nitesh Narayan Lal <nitesh@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200423195050.26310-1-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-05-04  x86: nVMX: add new test for vmread/vmwrite flags preservation  (Simon Smith) [1 file, -0/+140]
This commit adds new unit tests for commit a4d956b93904 ("KVM: nVMX: vmread should not set rflags to specify success in case of #PF") The two new tests force a vmread and a vmwrite on an unmapped address to cause a #PF and verify that the low byte of %rflags is preserved and that %rip is not advanced. The commit fixed a bug in vmread, but we include a test for vmwrite as well for completeness. Before the aforementioned commit, the ALU flags would be incorrectly cleared and %rip would be advanced (for vmread). Signed-off-by: Simon Smith <brigidsmith@google.com> Reviewed-by: Jim Mattson <jmattson@google.com> Reviewed-by: Peter Shier <pshier@google.com> Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Reviewed-by: Oliver Upton <oupton@google.com> Message-Id: <20200420175834.258122-1-brigidsmith@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-30  s390x: Fix library constant definitions  (Janosch Frank) [1 file, -4/+4]
Seems like I uppercased the whole region instead of only the ULs when I added those definitions. Let's make the x lowercase again. Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Message-Id: <20200429143518.1360468-11-frankja@linux.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com>
2020-04-30  s390x: smp: Add restart when running test  (Janosch Frank) [1 file, -0/+29]
Let's make sure we can restart a cpu that is already running. Restarting it if it is stopped is implicitly tested by the other restart calls in the smp test. Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Message-Id: <20200429143518.1360468-10-frankja@linux.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com>
2020-04-30  s390x: smp: Use full PSW to bringup new cpu  (Janosch Frank) [2 files, -1/+4]
Up to now we ignored the psw mask and only used the psw address when bringing up a new cpu. For DAT we need to also load the mask, so let's do that. Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Message-Id: <20200429143518.1360468-8-frankja@linux.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com>
2020-04-30  s390x: smp: Remove unneeded cpu loops  (Janosch Frank) [1 file, -7/+1]
Now that we have a loop which is executed after we return from the main function of a secondary cpu, we can remove the surplus loops. Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Acked-by: David Hildenbrand <david@redhat.com> Message-Id: <20200429143518.1360468-7-frankja@linux.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com>
2020-04-30  s390x: smp: Loop if secondary cpu returns into cpu setup again  (Janosch Frank) [1 file, -1/+3]
Up to now a secondary cpu could have returned from the function it was executing and ended up somewhere in cstart64.S. This was mostly circumvented by an endless loop in the function that it executed. Let's add a loop to the end of the cpu setup, so we don't have to rely on added loops in the tests. Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20200429143518.1360468-6-frankja@linux.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com>
2020-04-30  s390x: smp: Test local interrupts after cpu reset  (Janosch Frank) [1 file, -0/+21]
Local interrupts (external and emergency call) should be cleared after any cpu reset. Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20200429143518.1360468-5-frankja@linux.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com>
2020-04-30  s390x: smp: Test stop and store status on a running and stopped cpu  (Janosch Frank) [1 file, -0/+14]
Let's also test the stop portion of the "stop and store status" sigp order. Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Message-Id: <20200429143518.1360468-4-frankja@linux.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com>
2020-04-30  s390x: smp: Dirty fpc before initial reset test  (Janosch Frank) [1 file, -0/+1]
Let's dirty the fpc, before we test if the initial reset sets it to 0. Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Reviewed-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Message-Id: <20200429143518.1360468-3-frankja@linux.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com>
2020-04-24  s390x: smp: Test all CRs on initial reset  (Janosch Frank) [1 file, -1/+17]
On initial reset, CRs 0 and 14 are set to pre-defined values and all other CRs are set to 0, so we also need to test CRs 1-13 and 15 for 0. And while we're at it, let's also set some values to CRs 1, 7 and 13, so we can actually be sure that they will be zeroed. Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: David Hildenbrand <david@redhat.com> Message-Id: <20200424093356.11931-1-frankja@linux.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com>
2020-04-24  s390x: unittests: Use smp parameter  (Andrew Jones) [1 file, -1/+1]
Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Janosch Frank <frankja@linux.ibm.com> Message-Id: <20200403094015.506838-1-drjones@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com>
2020-04-24  s390x/smp: add minimal test for sigp sense running status  (Christian Borntraeger) [3 files, -2/+16]
Two minimal tests:
- our own CPU should be running when we check ourselves
- a CPU should at least have some periods with a not-running indication; to speed things up we stop CPU1

Also rename smp_cpu_running to smp_sense_running_status. Reviewed-by: Janosch Frank <frankja@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Message-Id: <20200402154441.13063-1-borntraeger@de.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com>
2020-04-24  s390x: STFLE operates on doublewords  (David Hildenbrand) [2 files, -8/+8]
STFLE operates on doublewords, not bytes. Passing in "256" resulted in some ignored bits getting set. Not bad, but also not clean. Let's just convert our stfle handling code to operate on doublewords. Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: Janosch Frank <frankja@linux.ibm.com> Message-Id: <20200401163305.31550-1-david@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com>
2020-04-24  s390x/smp: fix detection of "running"  (Christian Borntraeger) [1 file, -1/+1]
On s390x hosts with a single CPU, the smp test case hangs (loops). The check for whether our restart has finished is wrong. Sigp sense running status checks whether the CPU is currently backed by a real CPU. This means that on single-CPU hosts a sigp sense running will never claim that a target is running. We need to check for not being stopped instead. Reviewed-by: Janosch Frank <frankja@linux.vnet.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Message-Id: <20200330084911.34248-2-borntraeger@de.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com>
2020-04-24  s390x: Add stsi 3.2.2 tests  (Janosch Frank) [2 files, -0/+74]
Subcode 3.2.2 is handled by KVM/QEMU and should therefore be tested a bit more thoroughly. In this test we set a custom name and uuid through the QEMU command line. Both parameters will be passed to the guest on a stsi subcode 3.2.2 call and will then be checked. We also compare the configured cpu numbers against the smp reported numbers and if the reserved + configured add up to the total number reported. Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20200331071456.3302-1-frankja@linux.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com>
2020-04-24  MAINTAINERS: s390x: add linux-s390 list  (Cornelia Huck) [1 file, -0/+1]
It makes sense to cc: patches there as well. Signed-off-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <20200324121722.9776-3-cohuck@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com>
2020-04-24  MAINTAINERS: s390x: add myself as reviewer  (Cornelia Huck) [1 file, -0/+1]
Signed-off-by: Cornelia Huck <cohuck@redhat.com> Acked-by: Janosch Frank <frankja@linux.ibm.com> Message-Id: <20200324121722.9776-2-cohuck@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com>
2020-04-24  MAINTAINERS: Add Janosch as a s390x maintainer  (Thomas Huth) [1 file, -1/+1]
Both David and I often do not have as much spare time for the kvm-unit-tests as we would like to have, so we could use a little bit of additional help here. Janosch did some excellent work for the s390x kvm-unit-tests in the past months and has been listed as a reviewer for these patches for quite a while already, so he is very well suited for the maintainer job here, too. Signed-off-by: Thomas Huth <thuth@redhat.com> Acked-by: Janosch Frank <frankja@de.ibm.com> Acked-by: David Hildenbrand <david@redhat.com> Message-Id: <20200205101935.19219-1-thuth@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com>
2020-04-23  SVM: move guest past HLT  (Paolo Bonzini) [1 file, -0/+8]
On AMD, the guest is not woken up from HLT by the interrupt or NMI vmexits. Therefore we have to fix up the RIP manually. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-23  nVMX: Add testcase to cover VMWRITE to nonexistent CR3-target values  (Sean Christopherson) [1 file, -0/+4]
Enhance test_cr3_targets() to verify that attempting to write CR3-target value fields beyond the reported number of supported targets fails. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200416162814.32065-1-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-23  x86: svm: call default_prepare from exc_inject_prepare  (Paolo Bonzini) [1 file, -2/+3]
Otherwise, the exc_inject fails if passed first on the command line. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-23  x86: VMX: Add another corner-case VMX-preemption timer test  (Jim Mattson) [1 file, -0/+104]
Ensure that the delivery of a "VMX-preemption timer expired" VM-exit doesn't disrupt single-stepping in the guest. Note that passing this test doesn't ensure correctness. Signed-off-by: Jim Mattson <jmattson@google.com> Reviewed-by: Oliver Upton <oupton@google.com> Reviewed-by: Peter Shier <pshier@google.com> Message-Id: <20200414001026.50051-2-jmattson@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-23  x86: nVMX: Add some corner-case VMX-preemption timer tests  (Jim Mattson) [1 file, -0/+120]
Verify that both injected events and debug traps that result from pending debug exceptions take precedence over a "VMX-preemption timer expired" VM-exit resulting from a zero-valued VMX-preemption timer. Signed-off-by: Jim Mattson <jmattson@google.com> Reviewed-by: Oliver Upton <oupton@google.com> Reviewed-by: Peter Shier <pshier@google.com> Message-Id: <20200414001026.50051-1-jmattson@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-23  x86: access: Add tests for reserved bits of guest physical address  (Mohammed Gamal) [2 files, -4/+32]
This extends the access tests to also test for reserved bits in guest physical addresses. Signed-off-by: Mohammed Gamal <mgamal@redhat.com> Message-Id: <20200423103623.431206-1-mgamal@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-21  kvm-unit-tests: nSVM: Test that CR0[63:32] are not set on VMRUN of nested guests  (Krish Sadhukhan) [1 file, -0/+14]
According to section "Canonicalization and Consistency Checks" in APM vol. 2, the following guest state is illegal: "CR0[63:32] are not zero." Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Message-Id: <20200420225825.3184-2-krish.sadhukhan@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-17  x86: VMX: test MTF VM-exit event injection  (Oliver Upton) [1 file, -2/+11]
SDM 26.6.2 describes how the VM-entry interruption-information field may be configured to inject an MTF VM-exit upon VM-entry. Ensure that an MTF VM-exit occurs when the VM-entry interruption-information field is configured appropriately by the host. Signed-off-by: Oliver Upton <oupton@google.com> Reviewed-by: Jim Mattson <jmattson@google.com> Reviewed-by: Peter Shier <pshier@google.com> Message-Id: <20200414214634.126508-2-oupton@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-16  kvm-unit-tests: nSVM: Test CR0.CD and CR0.NW combination on VMRUN of nested guests  (Krish Sadhukhan) [1 file, -1/+27]
According to section "Canonicalization and Consistency Checks" in APM vol. 2, the following guest state combination is illegal: "CR0.CD is zero and CR0.NW is set" Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Message-Id: <20200409205035.16830-4-krish.sadhukhan@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-16  kvm-unit-tests: SVM: Add #defines for CR0.CD and CR0.NW  (Krish Sadhukhan) [1 file, -0/+2]
Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Message-Id: <20200409205035.16830-3-krish.sadhukhan@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-15  svm: add a test for exception injection  (Paolo Bonzini) [1 file, -0/+70]
Cover VMRUN's testing whether EVENTINJ.TYPE = 3 (exception) has been specified with a vector that does not correspond to an exception. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-09  svm: Add test cases around NMI injection with HLT  (Cathy Avery) [1 file, -0/+103]
This test checks for NMI delivery to L2 and intercepted NMI (VMEXIT_NMI) delivery to L1 during an active HLT. Signed-off-by: Cathy Avery <cavery@redhat.com> Message-Id: <20200409133247.16653-3-cavery@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-09  svm: Add test cases around NMI injection  (Cathy Avery) [1 file, -0/+82]
This test checks for NMI delivery to L2 and intercepted NMI (VMEXIT_NMI) delivery to L1. Signed-off-by: Cathy Avery <cavery@redhat.com> Message-Id: <20200409133247.16653-2-cavery@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-07  arch-run: Add reserved variables to the default environ  (Andrew Jones) [3 files, -50/+97]
Add the already reserved (see README) variables to the default environ. To do so neatly we rework the environ creation a bit too. mkstandalone also learns to honor config.mak as to whether or not to make environs, and we allow the $ERRATATXT file to be selected at configure time. Signed-off-by: Andrew Jones <drjones@redhat.com> Message-Id: <20200407113312.65587-1-drjones@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-07  runtime: Always honor the unittests.cfg accel requirement  (Andrew Jones) [1 file, -1/+8]
If the unittests.cfg file specifies an accel parameter then don't let the user override it. Signed-off-by: Andrew Jones <drjones@redhat.com> Message-Id: <20200404154739.217584-3-drjones@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-04-07  run_migration: Implement our own wait  (Andrew Jones) [1 file, -1/+5]
Bash 5.0 changed 'wait' with no arguments to also wait for all process substitutions. For example, with Bash 4.4 this completes, after waiting for the sleep ( sleep 1 & wait ) > >(tee /dev/null) but with Bash 5.0 it does not. The kvm-unit-tests (overly) complex bash scripts have a 'run_migration ... 2> >(tee /dev/stderr)' where the '2> >(tee /dev/stderr)' comes from 'run_qemu'. Since 'run_migration' calls 'wait' it will never complete with Bash 5.0. Resolve by implementing our own wait; just a loop on job count. Signed-off-by: Andrew Jones <drjones@redhat.com> Message-Id: <20200404154739.217584-2-drjones@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
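A job-count wait loop can be sketched like this; the function name is illustrative, not the script's actual code:

```shell
#!/usr/bin/env bash
# Sketch of a 'wait' replacement that only waits for background
# jobs, not process substitutions: loop until no running jobs remain.
wait_for_jobs()
{
	while [ -n "$(jobs -rp)" ]; do
		sleep 0.1
	done
}

sleep 0.3 &
sleep 0.2 &
wait_for_jobs
echo "all jobs finished"
```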
2020-04-04  arm/arm64: ITS: pending table migration test  (Eric Auger) [3 files, -0/+169]
Add two new migration tests. One testing the migration of a topology where collection were unmapped. The second test checks the migration of the pending table. Signed-off-by: Eric Auger <eric.auger@redhat.com> [ Complete migration even when the test is skipped. Otherwise the migration scripts hang. Also, without the KVM fix for unmapped collections, migration will fail and the test will hang, so use errata to skip it instead. ] Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-04  arm/arm64: ITS: migration tests  (Eric Auger) [4 files, -6/+91]
This test maps LPIs (populates the device table, the collection table, the interrupt translation tables and the configuration table), migrates, and makes sure the translation is correct on the destination. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Zenghui Yu <yuzenghui@huawei.com> [ Complete migration even when the test is skipped. Otherwise the migration scripts hang. ] Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03  arm/run: Allow Migration tests  (Eric Auger) [3 files, -2/+30]
Let's link getchar.o so puts and getchar can be used from the tests. Then allow tests belonging to the migration group to trigger the migration from the test code by putting "migrate" into the uart. The code can then wait for the migration completion by using getchar(). The __getchar implementation is minimalist, as it just reads the data register. It is just meant to read the single character emitted at the end of the migration by the runner script. It is not meant to read more data (FIFOs are not enabled). Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03  arm/arm64: ITS: INT functional tests  (Eric Auger) [2 files, -11/+213]
Trigger LPIs through the INT command. The test checks that the LPI hits the right CPU and triggers the right LPI intid, i.e. that the translation is correct. Updates to the config table are also tested, along with the inv and invall commands. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Zenghui Yu <yuzenghui@huawei.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03  arm/arm64: ITS: Commands  (Eric Auger) [3 files, -1/+515]
Implement main ITS commands. The code is largely inherited from the ITS driver. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Zenghui Yu <yuzenghui@huawei.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03  arm/arm64: ITS: Device and collection Initialization  (Eric Auger) [2 files, -0/+57]
Introduce helper functions to register:
- a new device, characterized by its device id and the max number of event IDs that dimension its ITT (Interrupt Translation Table); the function allocates the ITT
- a new collection, characterized by its ID and the target processing engine (PE)

Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Zenghui Yu <yuzenghui@huawei.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03  arm/arm64: ITS: its_enable_defaults  (Eric Auger) [4 files, -0/+45]
its_enable_defaults() enables LPIs at the redistributor level and the ITS level. gicv3_enable_defaults() must be called beforehand. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Zenghui Yu <yuzenghui@huawei.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03  arm/arm64: ITS: Introspection tests  (Eric Auger) [8 files, -7/+311]
Detect the presence of an ITS as part of the GICv3 init routine, initialize its base address, and read a few registers (the IIDR and the TYPER) to store its dimensioning parameters. Parse the BASER registers. As part of the init sequence we also init all the requested tables. This is our first ITS test, belonging to a new "its" group. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Zenghui Yu <yuzenghui@huawei.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03  arm/arm64: gicv3: Set the LPI config and pending tables  (Eric Auger) [2 files, -0/+70]
Allocate the LPI configuration table and the per-redistributor pending tables. Set the redistributor's PROPBASER and PENDBASER. The LPIs are enabled by default in the config table. Also introduce a helper routine that allows setting the pending table bit for a given LPI, and macros to set/get its configuration. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Zenghui Yu <yuzenghui@huawei.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03  arm/arm64: gicv3: Add some re-distributor defines  (Eric Auger) [1 file, -0/+6]
PROPBASER, PENDBASER and GICR_CTRL will be used for LPI management. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Zenghui Yu <yuzenghui@huawei.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03  arm/arm64: gic: Introduce setup_irq() helper  (Eric Auger) [2 files, -13/+8]
The ipi_enable() code would be reusable for interrupts other than IPIs. Let's rename it setup_irq() and pass an interrupt handler pointer. Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03  page_alloc: Introduce get_order()  (Eric Auger) [2 files, -1/+7]
Compute the power of 2 order of a size. Use it in page_memalign. Other users are looming. Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03  libcflat: Add other size defines  (Eric Auger) [1 file, -0/+3]
Introduce additional SZ_256, SZ_8K, SZ_16K macros that will be used by ITS tests. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Thomas Huth <thuth@redhat.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03arm: pmu: Test overflow interruptsEric Auger2-0/+145
Test overflows for MEM_ACCESS and SW_INCR events. Also tests overflows with 64-bit events. Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03arm: gic: Introduce gic_irq_set_clr_enable() helperEric Auger2-0/+35
Allow setting or clearing the enable state of a PPI/SGI/SPI. Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03arm: pmu: test 32-bit <-> 64-bit transitionsEric Auger2-0/+144
Test configurations where we transition from 32-bit to 64-bit counters and back. Also test configurations where chained counters are configured but only one counter is enabled. Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03arm: pmu: Test chained countersEric Auger2-1/+109
Add 2 tests exercising chained counters. The first one uses CPU_CYCLES and the second one uses SW_INCR. Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03arm: pmu: Test SW_INCR event countEric Auger2-0/+53
Add tests dedicated to SW_INCR event counting. Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03arm: pmu: Basic event counter TestsEric Auger3-0/+271
Adds the following tests:
- event-counter-config: test event counter configuration
- basic-event-count: programs counters #0 and #1 to count 2 required events (resp. CPU_CYCLES and INST_RETIRED). Counter #0 is preset to a value close enough to the 32b overflow limit so that we check the overflow bit is set after the execution of the asm loop.
- mem-access: counts MEM_ACCESS event on counters #0 and #1 with and without 32-bit overflow.
Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Andre Przywara <andre.przywara@arm.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03arm: pmu: Check Required Event SupportEric Auger3-0/+86
If event counters are implemented, check that the common events required by PMUv3 are implemented. Some are unconditionally required (SW_INCR, CPU_CYCLES, and either INST_RETIRED or INST_SPEC). Some others are only required if the implementation implements certain other features. Check those which are unconditionally required. This test currently fails on TCG as neither INST_RETIRED nor INST_SPEC is supported. Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03arm: pmu: Introduce defines for PMU versionsEric Auger1-5/+21
Introduce some defines encoding the different PMU versions. v3 is encoded differently in 32 and 64 bits. Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03arm: pmu: Add a pmu structEric Auger1-8/+21
This struct aims at storing information potentially used by all tests such as the pmu version, the read-only part of the PMCR, the number of implemented event counters, ... Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Andre Przywara <andre.przywara@arm.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03arm: pmu: Don't check PMCR.IMP anymoreEric Auger1-25/+14
check_pmcr() checks that the IMP field is different from 0. However, a zero IMP field is permitted by the architecture, meaning MIDR_EL1 should be looked at instead. This causes TCG to fail this test with '-cpu max', because in that case PMCR.IMP is set equal to MIDR_EL1.Implementer, which is 0. So let's remove the check_pmcr() test and just print PMCR info in the pmu_probe() function. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reported-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03arm: pmu: Let pmu tests take a sub-test parameterEric Auger2-12/+20
As we intend to introduce more PMU tests, let's add a sub-test parameter that will allow us to categorize them. Existing tests are in the cycle-counter category. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Andre Przywara <andre.przywara@arm.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03arm64: Provide read/write_sysreg_sAndrew Jones1-0/+11
Sometimes we need to test access to system registers which are missing assembler mnemonics. Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
2020-04-03arm64: timer: Speed up gic-timer-state checkAndrew Jones3-29/+63
Let's bail out of the wait loop if we see the expected state to save over six seconds of run time. Make sure we wait a bit before reading the registers and double check again after, though, to somewhat mitigate the chance of seeing the expected state by accident. We also take this opportunity to push more IRQ state code to the library. Cc: Zenghui Yu <yuzenghui@huawei.com> Reviewed-by: Zenghui Yu <yuzenghui@huawei.com> Tested-by: Zenghui Yu <yuzenghui@huawei.com> Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
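The wait-loop shape described above can be sketched as follows (hypothetical helper names; the real library code reads GIC registers rather than calling a function pointer):

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch: bail out as soon as the expected state is seen, but re-read
 * to confirm, so a transient value is less likely to be mistaken for
 * the final state. */
static bool wait_for_state(int (*read_state)(void), int expected, int max_polls)
{
	for (int i = 0; i < max_polls; i++) {
		/* a small delay would go here on real hardware */
		if (read_state() != expected)
			continue;
		/* seen once; double check that it is stable */
		return read_state() == expected;
	}
	return false; /* timed out */
}
```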
2020-04-03arm64: timer: Use existing helpers to access counter/timersZenghui Yu1-8/+8
We already have some good helpers to access the counter and timer registers. Use them to avoid open coding the accessors again. Signed-off-by: Zenghui Yu <yuzenghui@huawei.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03arm64: timer: Use the proper RDist register name in GICv3Zenghui Yu2-2/+6
We're actually going to read GICR_ISACTIVER0 and GICR_ISPENDR0 (in the SGI_base frame of the redistributor) to get the active/pending state of the timer interrupt. Fix this typo. And since they have the same value, there's no functional change. Signed-off-by: Zenghui Yu <yuzenghui@huawei.com> Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03arm/arm64: gic: Move gic_state enumeration to asm/gic.hZenghui Yu2-7/+7
The state of each interrupt is defined by the GIC architecture and maintained by the GIC hardware; it is not specific to the timer HW. Let's move this software enumeration to a more proper place. Signed-off-by: Zenghui Yu <yuzenghui@huawei.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03arm/arm64: Perform dcache clean + invalidate after turning MMU offAlexandru Elisei5-0/+78
When the MMU is off, data accesses are to Device nGnRnE memory on arm64 [1] or to Strongly-Ordered memory on arm [2]. This means that the accesses are non-cacheable. Perform a dcache clean to PoC so that the newer values written while the MMU was on reach memory, and we don't read stale values after we turn the MMU off. Perform an invalidation so we can access the data written to memory after we turn the MMU back on; this prevents reading back the stale values we cleaned from the cache when we turned the MMU off. Data caches are PIPT and the VAs are translated using the current translation tables, or an identity mapping (what Arm calls a "flat mapping") when the MMU is off [1, 2]. Do the clean + invalidate when the MMU is off so we don't depend on the current translation tables and we can make sure that the operation applies to the entire physical memory. The patch was tested by hacking arm/selftest.c:

+#include <alloc_page.h>
+#include <asm/mmu.h>
 int main(int argc, char **argv)
 {
+	int *x = alloc_page();
+
 	report_prefix_push("selftest");
+
+	*x = 0x42;
+	mmu_disable();
+	report(*x == 0x42, "read back value written with MMU on");
+
+	*x = 0x50;
+	mmu_enable(current_thread_info()->pgtable);
+	report(*x == 0x50, "read back value written with MMU off");
+
 	if (argc < 2)
 		report_abort("no test specified");

Without the fix, the first report fails, and the test usually hangs before the second report. This is because mmu_enable pushes the LR register on the stack when the MMU is off, which means that the value will be written to memory. However, after asm_mmu_enable, the MMU is enabled, and we read it back from the dcache, thus getting garbage. With the fix, the two reports pass. [1] ARM DDI 0487E.a, section D5.2.9 [2] ARM DDI 0406C.d, section B3.2.1 Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03arm64: timer: Test behavior when timer disabled or maskedAlexandru Elisei2-1/+8
When the timer is disabled (the *_CTL_EL0.ENABLE bit is clear) or the timer interrupt is masked at the timer level (the *_CTL_EL0.IMASK bit is set), timer interrupts must not be pending or asserted by the VGIC. However, it is only when the timer interrupt is masked (and not when the timer is disabled) that we can still check that the timer condition is met, by reading the *_CTL_EL0.ISTATUS bit. This test was used to discover a bug and test the fix introduced by KVM commit 16e604a437c8 ("KVM: arm/arm64: vgic: Reevaluate level sensitive interrupts on enable"). Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03arm64: timer: Check the timer interrupt stateAlexandru Elisei1-4/+11
We check that the interrupt is pending (or not) at the GIC level, but we don't check if the timer is asserting it (or not). Let's make sure we don't run into a strange situation where the two devices' states aren't synchronized. Coincidentally, the "interrupt signal no longer pending" test fails for non-emulated timers (i.e., the virtual timer on a non-vhe host) if the host kernel doesn't have patch 16e604a437c8 ("KVM: arm/arm64: vgic: Reevaluate level sensitive interrupts on enable"). Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03arm64: timer: Wait for the GIC to sample timer interrupt stateAlexandru Elisei2-6/+39
There is a delay between the timer asserting the interrupt and the GIC sampling the interrupt state. Let's take that into account when we are checking if the timer interrupt is pending (or not) at the GIC level. An interrupt can be pending or active and pending [1,2]. Let's be precise and check that the interrupt is actually pending, not active and pending. [1] ARM IHI 0048B.b, section 1.4.1 [2] ARM IHI 0069E, section 1.2.2 Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
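The "pending but not active-and-pending" distinction can be expressed as a small predicate over the raw register values (a hypothetical helper; names and the register-read step are our assumptions, not the test's actual code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* True when intid is pending but not active-and-pending: given raw
 * GICR_ISPENDR0/GICR_ISACTIVER0 values, the pending bit must be set
 * while the active bit is clear. */
static bool irq_pending_only(uint32_t ispendr, uint32_t isactiver, int intid)
{
	uint32_t mask = 1u << intid;

	return (ispendr & mask) && !(isactiver & mask);
}
```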
2020-04-03arm64: timer: EOIR the interrupt after masking the timerAlexandru Elisei1-3/+4
Writing to the EOIR register before masking the HW mapped timer interrupt can cause taking another timer interrupt immediately after exception return. This doesn't happen all the time, because KVM reevaluates the state of pending HW mapped level sensitive interrupts on each guest exit. If the second interrupt is pending and a guest exit occurs after masking the timer interrupt and before the ERET (which restores PSTATE.I), then KVM removes it. Move the write after the IMASK bit has been set to prevent this from happening. Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03arm64: timer: Make irq_received volatileAlexandru Elisei1-1/+1
The irq_received field is modified by the interrupt handler. Make it volatile so that the compiler doesn't reorder accesses with regard to the instruction that will be causing the interrupt. Suggested-by: Andre Przywara <andre.przywara@arm.com> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
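This is the classic pattern for a flag written by an asynchronous handler. A user-space analogue, with a POSIX signal standing in for the timer interrupt (the names are ours, not the test's):

```c
#include <assert.h>
#include <signal.h>

static volatile sig_atomic_t irq_received;

static void irq_handler(int sig)
{
	(void)sig;
	irq_received = 1;	/* written asynchronously to the main flow */
}

/* Without volatile, the compiler could hoist the load of irq_received
 * out of the loop and spin on a stale value forever. */
static int wait_for_irq(int max_polls)
{
	for (int i = 0; i < max_polls; i++)
		if (irq_received)
			return 1;
	return 0;
}
```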
2020-04-03arm64: timer: Add ISB before reading the counter valueAlexandru Elisei1-0/+2
Reads of the physical counter and the virtual counter registers "can occur speculatively and out of order relative to other instructions executed on the same PE" [1, 2]. There is no theoretical limit to the number of instructions that the CPU can reorder, and we use the counter value to program the timer to fire in the future. Add an ISB before reading the counter to make sure the read instruction is not reordered too far into the past with regard to the instruction that programs the timer alarm, thus causing the timer to fire unexpectedly. This matches what Linux does (see arch/arm64/include/asm/arch_timer.h). Because we use the counter value to program the timer, we create a register dependency [3] between the value that we read and the value that we write to CVAL, and thus we don't need a barrier after the read. Linux does things differently because the read needs to be ordered with regard to a memory load (more information in commit 75a19a0202db ("arm64: arch_timer: Ensure counter register reads occur with seqlock held")). This also matches what we already do in get_cntvct from lib/arm{,64}/asm/processor.h. [1] ARM DDI 0487E.a, section D11.2.1 [2] ARM DDI 0487E.a, section D11.2.2 [3] ARM DDI 0487E.a, section B2.3.2 Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
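As a rough illustration of the ordering argument (an AArch64 sketch using the virtual timer register names, not the test's actual code):

```
isb                     // the counter read below cannot be speculated
                        // before this point
mrs x0, cntvct_el0      // read the virtual counter
add x0, x0, x1          // deadline = now + delta (delta assumed in x1)
msr cntv_cval_el0, x0   // program the comparator; the register
                        // dependency on x0 orders this write after the
                        // counter read, so no barrier is needed here
```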
2020-04-03arm64: timer: Add ISB after register writesAlexandru Elisei1-3/+6
From ARM DDI 0487E.a glossary, the section "Context synchronization event": "All direct and indirect writes to System registers that are made before the Context synchronization event affect any instruction, including a direct read, that appears in program order after the instruction causing the Context synchronization event." The ISB instruction is a context synchronization event [1]. Add an ISB after all register writes, to make sure that the writes have been completed when we try to test their effects. [1] ARM DDI 0487E.a, section C6.2.96 Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03arm/arm64: psci: Don't run C code without stack or vectorsAlexandru Elisei1-3/+11
The psci test performs a series of CPU_ON/CPU_OFF cycles for CPU 1. This is done by setting the entry point for the CPU_ON call to the physical address of the C function cpu_psci_cpu_die. The compiler is well within its rights to use the stack when generating code for cpu_psci_cpu_die. However, because no stack initialization has been done, the stack pointer is zero, as set by KVM when creating the VCPU. This causes a data abort without a change in exception level. The VBAR_EL1 register is also zero (the KVM reset value for VBAR_EL1), the MMU is off, and we end up trying to fetch instructions from address 0x200. At this point, a stage 2 instruction abort is generated which is taken to KVM. KVM interprets this as an instruction fetch from an I/O region, and injects a prefetch abort into the guest. Prefetch abort is a synchronous exception, and on guest return the VCPU PC will be set to VBAR_EL1 + 0x200, which is... 0x200. The VCPU ends up in an infinite loop causing a prefetch abort while fetching the instruction to service the said abort. To avoid all of this, let's use the assembly function halt as the CPU_ON entry address. Also, expand the check to test that we get PSCI_RET_SUCCESS exactly once, as we're never offlining the CPU during the test. Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-04-03Makefile: Use no-stack-protector compiler optionsAlexandru Elisei1-2/+2
Let's fix the typos so that the -fno-stack-protector and -fno-stack-protector-all compiler options are actually used. Tested by compiling for arm64, x86_64 and ppc64 little endian. Before the patch, the arguments were missing from the gcc invocation; after the patch, they were present. Also fixes a compilation error that I was seeing with aarch64 gcc version 9.2.0, where the linker was complaining about an undefined reference to the symbol __stack_chk_guard. Fixes: e5c73790f5f0 ("build: don't reevaluate cc-option shell command") CC: Paolo Bonzini <pbonzini@redhat.com> CC: Drew Jones <drjones@redhat.com> CC: Laurent Vivier <lvivier@redhat.com> CC: Thomas Huth <thuth@redhat.com> CC: David Hildenbrand <david@redhat.com> CC: Janosch Frank <frankja@linux.ibm.com> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Reviewed-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Laurent Vivier <lvivier@redhat.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
2020-03-31x86: vmx: skip atomic_switch_overflow_msrs_test on bare metalNadav Amit1-1/+4
The test atomic_switch_overflow_msrs_test is only expected to pass on KVM. Skip the test when the debug device is not supported to avoid failures on bare-metal. Cc: Marc Orr <marcorr@google.com> Signed-off-by: Nadav Amit <namit@vmware.com> Message-Id: <20200321050616.4272-1-namit@vmware.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-31x86: vmx: Fix "EPT violation - paging structure" testNadav Amit1-8/+9
Running the tests with more than 4GB of memory results in the following failure: FAIL: EPT violation - paging structure It appears that the test mistakenly used get_ept_pte() to retrieve the guest PTE, but this function is intended for accessing EPT and not the guest page tables. Use get_pte_level() instead of get_ept_pte(). Tested on bare-metal only. Signed-off-by: Nadav Amit <namit@vmware.com> Message-Id: <20200321050555.4212-1-namit@vmware.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-31x86: access: Shadow CR0, CR4 and EFER to avoid unnecessary VM-ExitsSean Christopherson1-18/+27
Track the last known CR0, CR4, and EFER values in the access test to avoid taking a VM-Exit on every. single. test. The EFER VM-Exits in particular absolutely tank performance when running the test in L1. Opportunistically tweak the 5-level test to print that it's starting before configuring 5-level page tables, e.g. in case enabling 5-level paging runs into issues. No functional change intended. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200310035432.3447-1-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-31x86: Reload SS when switching to 5-level page tablesSean Christopherson1-0/+3
Load SS with a valid segment when switching to 5-level page tables to avoid taking a #SS due to a NULL segment when making a CALL with paging disabled. The "access" test calls setup_5level_page_table()/switch_to_5level() after generating and handling usermode exceptions. Per Intel's SDM, SS is nullified on an exception that changes CPL: The new SS is set to NULL if there is a change in CPL. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200310034729.2941-1-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-31x86: Run unit tests with --no-reboot so shutdowns show up as failuresSean Christopherson1-1/+1
Run tests with --no-reboot so that triple fault shutdowns get reported as failures. By default, Qemu automatically reboots on shutdown, i.e. automatically restarts the test, eventually leading to a timeout. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200310025249.30961-1-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-18kvm-unit-test: nVMX: Test GUEST_BNDCFGS VM-Entry control on vmentry of ↵Krish Sadhukhan1-0/+45
nested guests According to section "Checks on Guest Control Registers, Debug Registers, and MSRs" in Intel SDM vol 3C, the following checks are performed on vmentry of nested guests: If the "load IA32_BNDCFGS" VM-entry control is 1, the following checks are performed on the field for the IA32_BNDCFGS MSR: — Bits reserved in the IA32_BNDCFGS MSR must be 0. — The linear address in bits 63:12 must be canonical. Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-18kvm-unit-test: nSVM: Test SVME.EFER on VMRUN of nested guestsKrish Sadhukhan1-0/+22
According to the section "Canonicalization and Consistency Checks" in 15.5.1 in APM vol 2, setting EFER.SVME to zero is an illegal guest state and will cause the nested guest to VMEXIT to the guest with an exit code of VMEXIT_INVALID. Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-18kvm-unit-test: nSVM: Add alternative (v2) test format for nested guestsKrish Sadhukhan3-18/+64
..so that we can add tests, such as VMCB consistency tests, that only need to proceed up to the execution of the first nested-guest instruction and do not require us to define all the functions that the current format dictates. Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-18svm: move VMCB out of struct svm_testPaolo Bonzini3-89/+89
This will simplify accesses from v2 tests. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-18Merge branch 'restructure-svm' into HEADPaolo Bonzini4-1651/+1703
2020-03-18kvm-unit-test: nSVM: Restructure nSVM test codeKrish Sadhukhan4-1499/+1551
..so that it matches its counterpart nVMX. This restructuring effort separates the test framework from the tests and puts them in different files. Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-18x86: vmx: Expect multiple error codes on HOST_EFER corruptionNadav Amit1-1/+23
Extended HOST_EFER tests can fail with a different error code than the expected one, since the host address size bit is checked against EFER.LMA. This causes kvm-unit-tests to fail on bare metal. According to the SDM the errors are not ordered. Expect either "invalid control" or "invalid host state" error-codes to allow the tests to pass. The fix somewhat relaxes the tests, as there are cases when only "invalid host state" is a valid instruction error, but doing the fix in this manner prevents intrusive changes. Fixes: a22d7b5534c2 ("x86: vmx_tests: extend HOST_EFER tests") Signed-off-by: Nadav Amit <namit@vmware.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-17kvm-unit-test: VMX: Add enum for GUEST_BNDCFGS field and LOAD_BNDCFGS ↵Krish Sadhukhan1-0/+2
vmentry control field Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-17x86: Fix the logical destination mode testNitesh Narayan Lal2-5/+8
There are the following issues with the ioapic logical destination mode test:
- A race condition that is triggered when the interrupt handler ioapic_isr_86() is called at the same time by multiple vCPUs. Due to this, g_isr_86 is not correctly incremented. To prevent this, a spinlock is added around 'g_isr_86++'.
- On older QEMU versions the initial x2APIC ID is not set, so the local APIC IDs of the vCPUs are not configured and the logical destination mode test fails/hangs. Adding '+x2apic' to the qemu -cpu params ensures that the local APICs are configured every time, irrespective of the QEMU version.
- With '-machine kernel_irqchip=split' included in the ioapic test, test_ioapic_self_reconfigure() always fails and somehow leads to a state where, after submitting an IOAPIC fixed delivery - logical destination mode request, we never receive an interrupt back. For now, the physical and logical destination mode tests are moved above test_ioapic_self_reconfigure().
Fixes: b2a1ee7e ("kvm-unit-test: x86: ioapic: Test physical and logical destination mode") Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-14nVMX: Pass exit reason enum to print_vmexit_info()Sean Christopherson3-29/+28
Take the exit reason as a parameter when printing VM-Exit info instead of rereading it from the VMCS. Opportunistically clean up the related printing. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-14nVMX: Pass exit reason union to is_hypercall()Sean Christopherson1-12/+7
Pass the exit reason captured in VM-Entry results into __enter_guest()'s helpers to simplify code and eliminate extra VMREADS. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-14nVMX: Pass exit reason union to v1 exit handlersSean Christopherson3-115/+88
Pass the recently introduced "union exit_reason" to the v1 exit handlers and use it in lieu of a manual VMREAD of the exit reason. Opportunistically fix a variety of warts in the handlers, e.g. grabbing only bits 7:0 of the exit reason. Modify the "Unknown exit reason" prints to display the exit reason in hex format to make a failed VM-Entry more recognizable. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-14nVMX: Expose __enter_guest() and consolidate guest state test codeSean Christopherson3-92/+40
Expose __enter_guest() outside of vmx.c and use it in a new wrapper for testing guest state. Handling both success and failure paths in a common helper eliminates a lot of boilerplate code in the tests themselves. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-14nVMX: Drop redundant check for guest terminationSean Christopherson1-10/+0
Remove the check_for_guest_termination() call in enter_guest_with_bad_controls() as __enter_guest() unconditionally performs the check if VM-Enter is successful (and aborts on failed VM-Entry for the ...bad_controls() variant). Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-14nVMX: Consolidate non-canonical code in test_canonical()Sean Christopherson1-23/+17
Refactor test_canonical() to provide a single flow for the non-canonical path. Practically speaking, it's extremely unlikely that the field being tested already has a non-canonical address, and even less likely that it's anything other than NONCANONICAL. I.e. the added complexity doesn't come with added coverage. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
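For reference, a canonical-address check and the usual NONCANONICAL pattern can be sketched like this (48-bit virtual address width assumed; the helper name is ours, not the test's):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define NONCANONICAL 0xaaaaaaaaaaaaaaaaull

/* A 48-bit virtual address is canonical when bits 63:47 are all ones
 * or all zeros, i.e. bit 47 is sign-extended through bit 63. */
static bool is_canonical_48(uint64_t addr)
{
	int64_t sext = (int64_t)addr >> 47; /* arithmetic shift on gcc/clang */

	return sext == 0 || sext == -1;
}
```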
2020-03-14nVMX: Refactor VM-Entry "failure" struct into "result"Sean Christopherson3-79/+112
Rename "struct vmentry_failure" to "struct vmentry_result" and add the full VM-Exit reason to the result. Implement exit_reason as a union so that tests can easily pull out the parts of interest, e.g. basic exit reason, whether VM-Entry failed, etc... Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-14nVMX: Eliminate superfluous entry_failure_handler() wrapperSean Christopherson1-16/+3
Check and invoke the current entry failure handler directly from vmx_run() to eliminate an unnecessary layer and its stale comment. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-05x86: Move definition of some exception vectors into processor.hXiaoyao Li4-5/+5
Both processor.h and desc.h hold some definitions of exception vectors. Put them together in processor.h. Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-05svm: Add test cases around interrupt injection and haltingCathy Avery1-0/+141
This test checks for interrupt delivery to L2 and unintercepted hlt in L2. All tests are performed both with direct interrupt injection and external interrupt interception. Based on VMX test by Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Cathy Avery <cavery@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-05svm: rename and comment the pending_event_vmask testPaolo Bonzini1-9/+16
Both the pending_event and pending_event_vmask test are using the V_INTR_MASKING field. The difference is that pending_event_vmask runs with host IF cleared, and therefore does not expect INTR vmexits. Rename the test to clarify this, and add comments to explain what's going on. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-05svm: allow specifying the tests to be runPaolo Bonzini1-1/+45
Copy over the test_wanted machinery from vmx.c. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-02-28x86: VMX: the "noclone" attribute is not neededPaolo Bonzini1-1/+1
Don't use the "noclone" attribute as it's not needed. Also, clang does not support it. Reported-by: Bill Wendling <morbo@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-02-28svm: change operand to output-only for matching constraintBill Wendling1-2/+2
According to GNU extended asm documentation, "the two operands [of matching constraints] must include one input-only operand and one output-only operand." So remove the read/write modifier from the output constraint. Signed-off-by: Bill Wendling <morbo@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-02-28x86: realmode: syscall: add explicit size suffix to ambiguous instructionsBill Wendling2-4/+4
Clang requires explicit size suffixes for potentially ambiguous instructions: x86/realmode.c:1647:2: error: ambiguous instructions require an explicit suffix (could be 'cmpb', 'cmpw', or 'cmpl') MK_INSN_PERF(perf_memory_load, "cmp $0, (%edi)"); ^ x86/realmode.c:1591:10: note: expanded from macro 'MK_INSN_PERF' "1:" insn "\n" \ ^ <inline asm>:8:3: note: instantiated into assembly here 1:cmp $0, (%edi) ^ The 'w' and 'l' suffixes generate code that's identical to the gcc version without them. Signed-off-by: Bill Wendling <morbo@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-02-28pci: use uint32_t for unsigned long valuesPaolo Bonzini1-2/+2
The "pci_bar_*" functions use 64-bit masks, but the results are assigned to 32-bit variables; clang complains. Use signed masks that can be sign-extended at will. Reported-by: Bill Wendling <morbo@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-02-28x86: emulator: use "SSE2" for the targetBill Wendling1-1/+1
The movdqu and movapd instructions are SSE2 instructions. Clang interprets the __attribute__((target("sse"))) as allowing SSE only instructions. Using SSE2 instructions cause an error. Signed-off-by: Bill Wendling <morbo@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-02-25x86: VMX: Add tests for monitor trap flagOliver Upton2-0/+158
Test to verify that MTF VM-exits into host are synthesized when the 'monitor trap flag' processor-based VM-execution control is set under various conditions. Expect an MTF VM-exit if instruction execution produces no events other than MTF. Should instruction execution produce a concurrent debug-trap and MTF event, expect an MTF VM-exit with the 'pending debug exceptions' VMCS field set. Expect an MTF VM-exit to follow event delivery should instruction execution generate a higher-priority event, such as a general-protection fault. Lastly, expect an MTF VM-exit to follow delivery of a debug-trap software exception (INT1/INT3/INTO/INT n). Signed-off-by: Oliver Upton <oupton@google.com> Reviewed-by: Jim Mattson <jmattson@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-02-24vmx: tweak XFAILS for #DB testPaolo Bonzini1-2/+2
These were already fixed by KVM_CAP_EXCEPTION_PAYLOAD, but they were failing on old QEMUs that did not support it. The recent KVM patch "KVM: x86: Deliver exception payload on KVM_GET_VCPU_EVENTS" however fixed them even there, so it is about time to flip the arguments to check_db_exit and avoid ugly XPASS results with newer versions of QEMU. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-02-24x86: pmu: Test perfctr overflow after WRMSR on a running counterEric Hankland1-1/+18
Ensure that a WRMSR on a running counter will correctly update when the counter should overflow (similar to the existing overflow test case but with the counter being written to while it is running). Signed-off-by: Eric Hankland <ehankland@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-02-05Merge tag 's390x-2020-02-04' of https://gitlab.com/huth/kvm-unit-tests into HEADPaolo Bonzini7-66/+102
* s390x smp patches from Janosch
* Updates for the gitlab and Travis CI
2020-02-05x86: provide enabled and disabled variation of the PCID testPaolo Bonzini1-1/+6
The PCID test checks for exceptions when PCID=0 or INVPCID=0 in CPUID. Cover that by adding a separate testcase with different CPUID. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
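The two CPUID bits the test keys off can be decoded as below (helper names are illustrative; the bit positions, PCID in CPUID.1:ECX bit 17 and INVPCID in CPUID.7:EBX bit 10, are from the Intel SDM):

```c
#include <assert.h>
#include <stdint.h>

/* PCID: CPUID leaf 1, ECX bit 17.  INVPCID: CPUID leaf 7, EBX bit 10. */
static int has_pcid(uint32_t leaf1_ecx)    { return (leaf1_ecx >> 17) & 1; }
static int has_invpcid(uint32_t leaf7_ebx) { return (leaf7_ebx >> 10) & 1; }
```

Running one testcase with these bits cleared in the guest CPUID and one with them set covers both the #GP/#UD paths and the functional paths.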
2020-02-05x86: pmu: Test WRMSR on a running counterEric Hankland1-0/+16
Ensure that the value of the counter was successfully set to 0 after writing it while the counter was running. Signed-off-by: Eric Hankland <ehankland@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-02-05x86: Fix the name for the SMEP CPUID bitSean Christopherson2-3/+3
Fix the X86_FEATURE_* name for SMEP, which is incorrectly named X86_FEATURE_INVPCID_SINGLE and is a wee bit confusing when looking at the SMEP unit tests. Note, there is no INVPCID_SINGLE CPUID bit; the bogus name likely came from the Linux kernel, which has a synthetic feature flag for INVPCID_SINGLE in word 7, bit 7 (CPUID 0x7.EBX is stored in word 9). Fixes: 6ddcc29 ("kvm-unit-test: x86: Implement a generic wrapper for cpuid/cpuid_indexed functions") Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-02-04travis.yml: Prevent 'script' from premature exitWainer dos Santos Moschetta1-2/+1
The 'script' section finishes its execution prematurely whenever the shell's exit is called. If the intention is to force Travis to flag a build/test failure, the correct approach is to have a command statement return an error. This change combines the greps into a single AND statement that, when false, Travis will interpret as a build error. Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com> Message-Id: <20200115144610.41655-1-wainersm@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com>
2020-02-04gitlab-ci.yml: Remove ioapic from the x86 testsThomas Huth1-1/+1
The test recently started to fail (likely due to a recent change to x86/ioapic.c). According to Nitesh, it's not required to keep this test running with TCG, and we already check it with KVM on Travis, so let's simply disable it here now. Message-Id: <20191205151610.19299-1-thuth@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com>