author    Jiri Slaby <jslaby@suse.cz>  2018-09-07 11:13:07 +0200
committer Ben Hutchings <ben.hutchings@codethink.co.uk>  2018-09-12 20:19:25 +0100
commit    b0786914501e7ada39731c4f698afc58ff287686 (patch)
tree      7f4ef369d5e09916de31aea9e66309c86cfb5cf6
parent    98b5ccfcee60feb85f007c21619dd4fa1e235eef (diff)
download  linux-cip-b0786914501e7ada39731c4f698afc58ff287686.tar.gz
x86/mm/pat: Fix L1TF stable backport for CPA, 2nd call
Mostly recycling the commit log from adaba23ccd7d which fixed
populate_pmd, but did not fix populate_pud. The same problem exists
there.

Stable trees reverted the following patch:

    Revert "x86/mm/pat: Ensure cpa->pfn only contains page frame numbers"

    This reverts commit 87e2bd898d3a79a8c609f183180adac47879a2a4 which is
    commit edc3b9129cecd0f0857112136f5b8b1bc1d45918 upstream.

but the L1TF patch 02ff2769edbc backported here

    x86/mm/pat: Make set_memory_np() L1TF safe

    commit 958f79b9ee55dfaf00c8106ed1c22a2919e0028b upstream

    set_memory_np() is used to mark kernel mappings not present, but it
    has it's own open coded mechanism which does not have the L1TF
    protection of inverting the address bits.

assumed that cpa->pfn contains a PFN. With the above patch reverted it
does not, which causes the PUD to be set to an incorrect address shifted
by 12 bits, which can cause various failures.

Convert the address to a PFN before passing it to pud_pfn().

This is a 4.4 stable only patch to fix the L1TF patches backport there.

Cc: stable@vger.kernel.org # 4.4-only
Cc: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
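For context on why the one-line change below matters: pfn_pud() expects a page
frame number and shifts it left by PAGE_SHIFT (12 on x86) when building the
entry, so feeding it a physical address produces a PUD target 4096 times too
large. A minimal sketch of that arithmetic (plain userspace C with hypothetical
address values, not kernel code):

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SHIFT 12  /* 4 KiB pages on x86 */

    int main(void)
    {
    	/* Hypothetical physical address of the region being mapped. */
    	uint64_t phys_addr = 0x123456000ULL;

    	/* Correct: convert the physical address to a page frame number. */
    	uint64_t pfn = phys_addr >> PAGE_SHIFT;

    	/*
    	 * Bug being fixed: with the upstream conversion patch reverted,
    	 * cpa->pfn still holds a physical address.  Treating it as a PFN
    	 * (i.e. shifting it up again) yields a target 2^12 times too
    	 * large, so the PUD points at the wrong physical memory.
    	 */
    	uint64_t good_pud_target = pfn << PAGE_SHIFT;
    	uint64_t bad_pud_target  = phys_addr << PAGE_SHIFT;

    	printf("correct PUD target: 0x%llx\n",
    	       (unsigned long long)good_pud_target);
    	printf("buggy PUD target:   0x%llx\n",
    	       (unsigned long long)bad_pud_target);
    	return 0;
    }

The ">> PAGE_SHIFT" added in the hunk below performs the same physical-address
to PFN conversion before the value reaches pfn_pud().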
-rw-r--r--  arch/x86/mm/pageattr.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 1007fa80f5a6a1..0e1dd7d47f05ef 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -1079,7 +1079,7 @@ static int populate_pud(struct cpa_data *cpa, unsigned long start, pgd_t *pgd,
 	 * Map everything starting from the Gb boundary, possibly with 1G pages
 	 */
 	while (end - start >= PUD_SIZE) {
-		set_pud(pud, pud_mkhuge(pfn_pud(cpa->pfn,
+		set_pud(pud, pud_mkhuge(pfn_pud(cpa->pfn >> PAGE_SHIFT,
 				   canon_pgprot(pud_pgprot))));
 		start += PUD_SIZE;