author		Ben Hutchings <ben@decadent.org.uk>	2020-06-09 18:11:05 +0100
committer	Ben Hutchings <ben@decadent.org.uk>	2020-06-09 18:11:05 +0100
commit		534b67c46042059501de7c44256f1c249ede56c4 (patch)
tree		2b847592f72a0d0db8c921bec11e193718f153ca
parent		582af5cb2973430c5a6b19f9d346a74977b9b7a5 (diff)
download	linux-stable-queue-534b67c46042059501de7c44256f1c249ede56c4.tar.gz
Add patches related to SRBDS
8 files changed, 1558 insertions, 0 deletions
diff --git a/queue-3.16/random-always-use-batched-entropy-for-get_random_u-32-64.patch b/queue-3.16/random-always-use-batched-entropy-for-get_random_u-32-64.patch new file mode 100644 index 00000000..9a8a6187 --- /dev/null +++ b/queue-3.16/random-always-use-batched-entropy-for-get_random_u-32-64.patch @@ -0,0 +1,68 @@ +From: "Jason A. Donenfeld" <Jason@zx2c4.com> +Date: Fri, 21 Feb 2020 21:10:37 +0100 +Subject: random: always use batched entropy for get_random_u{32,64} + +commit 69efea712f5b0489e67d07565aad5c94e09a3e52 upstream. + +It turns out that RDRAND is pretty slow. Comparing these two +constructions: + + for (i = 0; i < CHACHA_BLOCK_SIZE; i += sizeof(ret)) + arch_get_random_long(&ret); + +and + + long buf[CHACHA_BLOCK_SIZE / sizeof(long)]; + extract_crng((u8 *)buf); + +it amortizes out to 352 cycles per long for the top one and 107 cycles +per long for the bottom one, on Coffee Lake Refresh, Intel Core i9-9880H. + +And importantly, the top one has the drawback of not benefiting from the +real rng, whereas the bottom one has all the nice benefits of using our +own chacha rng. As get_random_u{32,64} gets used in more places (perhaps +beyond what it was originally intended for when it was introduced as +get_random_{int,long} back in the md5 monstrosity era), it seems like it +might be a good thing to strengthen its posture a tiny bit. Doing this +should only be stronger and not any weaker because that pool is already +initialized with a bunch of rdrand data (when available). This way, we +get the benefits of the hardware rng as well as our own rng. + +Another benefit of this is that we no longer hit pitfalls of the recent +stream of AMD bugs in RDRAND. One often used code pattern for various +things is: + + do { + val = get_random_u32(); + } while (hash_table_contains_key(val)); + +That recent AMD bug rendered that pattern useless, whereas we're really +very certain that chacha20 output will give pretty distributed numbers, +no matter what. 
+ +So, this simplification seems better both from a security perspective +and from a performance perspective. + +Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> +Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> +Link: https://lore.kernel.org/r/20200221201037.30231-1-Jason@zx2c4.com +Signed-off-by: Theodore Ts'o <tytso@mit.edu> +Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> +[bwh: Backported to 3.16: Only get_random_int() exists here] +Signed-off-by: Ben Hutchings <ben@decadent.org.uk> +--- + drivers/char/random.c | 6 ------ + 1 file changed, 6 deletions(-) + +--- a/drivers/char/random.c ++++ b/drivers/char/random.c +@@ -1700,9 +1700,6 @@ unsigned int get_random_int(void) + __u32 *hash; + unsigned int ret; + +- if (arch_get_random_int(&ret)) +- return ret; +- + hash = get_cpu_var(get_random_int_hash); + + hash[0] += current->pid + jiffies + random_get_entropy(); diff --git a/queue-3.16/series b/queue-3.16/series index 445426c6..6a88cc2c 100644 --- a/queue-3.16/series +++ b/queue-3.16/series @@ -51,3 +51,10 @@ ext4-unsigned-int-compared-against-zero.patch ext4-fix-block-validity-checks-for-journal-inodes-using-indirect-blocks.patch ext4-don-t-perform-block-validity-checks-on-the-journal-inode.patch ext4-add-cond_resched-to-ext4_protect_reserved_inode.patch +x86-cpu-rename-cpu_data.x86_mask-to-cpu_data.x86_stepping.patch +x86-cpu-add-a-steppings-field-to-struct-x86_cpu_id.patch +x86-cpu-add-table-argument-to-cpu_matches.patch +x86-speculation-add-special-register-buffer-data-sampling-srbds-mitigation.patch +x86-speculation-add-srbds-vulnerability-and-mitigation-documentation.patch +x86-speculation-add-ivy-bridge-to-affected-list.patch +random-always-use-batched-entropy-for-get_random_u-32-64.patch diff --git a/queue-3.16/x86-cpu-add-a-steppings-field-to-struct-x86_cpu_id.patch b/queue-3.16/x86-cpu-add-a-steppings-field-to-struct-x86_cpu_id.patch new file mode 100644 index 00000000..2c54fdc2 --- /dev/null +++ 
b/queue-3.16/x86-cpu-add-a-steppings-field-to-struct-x86_cpu_id.patch @@ -0,0 +1,115 @@ +From: Mark Gross <mgross@linux.intel.com> +Date: Tue, 28 Apr 2020 16:58:20 +0200 +Subject: x86/cpu: Add a steppings field to struct x86_cpu_id + +commit e9d7144597b10ff13ff2264c059f7d4a7fbc89ac upstream. + +Intel uses the same family/model for several CPUs. Sometimes the +stepping must be checked to tell them apart. + +On x86 there can be at most 16 steppings. Add a steppings bitmask to +x86_cpu_id and a X86_MATCH_VENDOR_FAMILY_MODEL_STEPPING_FEATURE macro +and support for matching against family/model/stepping. + + [ bp: Massage. + tglx: Lightweight variant for backporting ] + +Signed-off-by: Mark Gross <mgross@linux.intel.com> +Signed-off-by: Borislav Petkov <bp@suse.de> +Signed-off-by: Thomas Gleixner <tglx@linutronix.de> +Reviewed-by: Tony Luck <tony.luck@intel.com> +Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com> +Signed-off-by: Ben Hutchings <ben@decadent.org.uk> +--- + arch/x86/include/asm/cpu_device_id.h | 27 +++++++++++++++++++++++++++ + arch/x86/kernel/cpu/match.c | 7 ++++++- + include/linux/mod_devicetable.h | 6 ++++++ + 3 files changed, 39 insertions(+), 1 deletion(-) + +--- a/arch/x86/include/asm/cpu_device_id.h ++++ b/arch/x86/include/asm/cpu_device_id.h +@@ -8,6 +8,33 @@ + + #include <linux/mod_devicetable.h> + ++#define X86_STEPPINGS(mins, maxs) GENMASK(maxs, mins) ++ ++/** ++ * X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE - Base macro for CPU matching ++ * @_vendor: The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY ++ * The name is expanded to X86_VENDOR_@_vendor ++ * @_family: The family number or X86_FAMILY_ANY ++ * @_model: The model number, model constant or X86_MODEL_ANY ++ * @_steppings: Bitmask for steppings, stepping constant or X86_STEPPING_ANY ++ * @_feature: A X86_FEATURE bit or X86_FEATURE_ANY ++ * @_data: Driver specific data or NULL. The internal storage ++ * format is unsigned long. The supplied value, pointer ++ * etc. 
is casted to unsigned long internally. ++ * ++ * Backport version to keep the SRBDS pile consistant. No shorter variants ++ * required for this. ++ */ ++#define X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(_vendor, _family, _model, \ ++ _steppings, _feature, _data) { \ ++ .vendor = X86_VENDOR_##_vendor, \ ++ .family = _family, \ ++ .model = _model, \ ++ .steppings = _steppings, \ ++ .feature = _feature, \ ++ .driver_data = (unsigned long) _data \ ++} ++ + extern const struct x86_cpu_id *x86_match_cpu(const struct x86_cpu_id *match); + + #endif +--- a/arch/x86/kernel/cpu/match.c ++++ b/arch/x86/kernel/cpu/match.c +@@ -33,13 +33,18 @@ const struct x86_cpu_id *x86_match_cpu(c + const struct x86_cpu_id *m; + struct cpuinfo_x86 *c = &boot_cpu_data; + +- for (m = match; m->vendor | m->family | m->model | m->feature; m++) { ++ for (m = match; ++ m->vendor | m->family | m->model | m->steppings | m->feature; ++ m++) { + if (m->vendor != X86_VENDOR_ANY && c->x86_vendor != m->vendor) + continue; + if (m->family != X86_FAMILY_ANY && c->x86 != m->family) + continue; + if (m->model != X86_MODEL_ANY && c->x86_model != m->model) + continue; ++ if (m->steppings != X86_STEPPING_ANY && ++ !(BIT(c->x86_stepping) & m->steppings)) ++ continue; + if (m->feature != X86_FEATURE_ANY && !cpu_has(c, m->feature)) + continue; + return m; +--- a/include/linux/mod_devicetable.h ++++ b/include/linux/mod_devicetable.h +@@ -559,6 +559,10 @@ struct amba_id { + /* + * MODULE_DEVICE_TABLE expects this struct to be called x86cpu_device_id. + * Although gcc seems to ignore this error, clang fails without this define. ++ * ++ * Note: The ordering of the struct is different from upstream because the ++ * static initializers in kernels < 5.7 still use C89 style while upstream ++ * has been converted to proper C99 initializers. 
+ */ + #define x86cpu_device_id x86_cpu_id + struct x86_cpu_id { +@@ -567,6 +571,7 @@ struct x86_cpu_id { + __u16 model; + __u16 feature; /* bit index */ + kernel_ulong_t driver_data; ++ __u16 steppings; + }; + + #define X86_FEATURE_MATCH(x) \ +@@ -575,6 +580,7 @@ struct x86_cpu_id { + #define X86_VENDOR_ANY 0xffff + #define X86_FAMILY_ANY 0 + #define X86_MODEL_ANY 0 ++#define X86_STEPPING_ANY 0 + #define X86_FEATURE_ANY 0 /* Same as FPU, you can't test for that */ + + /* diff --git a/queue-3.16/x86-cpu-add-table-argument-to-cpu_matches.patch b/queue-3.16/x86-cpu-add-table-argument-to-cpu_matches.patch new file mode 100644 index 00000000..3e02efdd --- /dev/null +++ b/queue-3.16/x86-cpu-add-table-argument-to-cpu_matches.patch @@ -0,0 +1,91 @@ +From: Mark Gross <mgross@linux.intel.com> +Date: Tue, 28 Apr 2020 16:58:20 +0200 +Subject: x86/cpu: Add 'table' argument to cpu_matches() + +commit 93920f61c2ad7edb01e63323832585796af75fc9 upstream. + +To make cpu_matches() reusable for other matching tables, have it take a +pointer to a x86_cpu_id table as an argument. + + [ bp: Flip arguments order. 
] + +Signed-off-by: Mark Gross <mgross@linux.intel.com> +Signed-off-by: Borislav Petkov <bp@suse.de> +Signed-off-by: Thomas Gleixner <tglx@linutronix.de> +Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com> +Signed-off-by: Ben Hutchings <ben@decadent.org.uk> +--- + arch/x86/kernel/cpu/common.c | 23 +++++++++++++---------- + 1 file changed, 13 insertions(+), 10 deletions(-) + +--- a/arch/x86/kernel/cpu/common.c ++++ b/arch/x86/kernel/cpu/common.c +@@ -872,9 +872,9 @@ static const __initconst struct x86_cpu_ + {} + }; + +-static bool __init cpu_matches(unsigned long which) ++static bool __init cpu_matches(const struct x86_cpu_id *table, unsigned long which) + { +- const struct x86_cpu_id *m = x86_match_cpu(cpu_vuln_whitelist); ++ const struct x86_cpu_id *m = x86_match_cpu(table); + + return m && !!(m->driver_data & which); + } +@@ -894,29 +894,32 @@ static void __init cpu_set_bug_bits(stru + u64 ia32_cap = x86_read_arch_cap_msr(); + + /* Set ITLB_MULTIHIT bug if cpu is not in the whitelist and not mitigated */ +- if (!cpu_matches(NO_ITLB_MULTIHIT) && !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO)) ++ if (!cpu_matches(cpu_vuln_whitelist, NO_ITLB_MULTIHIT) && ++ !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO)) + setup_force_cpu_bug(X86_BUG_ITLB_MULTIHIT); + +- if (cpu_matches(NO_SPECULATION)) ++ if (cpu_matches(cpu_vuln_whitelist, NO_SPECULATION)) + return; + + setup_force_cpu_bug(X86_BUG_SPECTRE_V1); + setup_force_cpu_bug(X86_BUG_SPECTRE_V2); + +- if (!cpu_matches(NO_SSB) && !(ia32_cap & ARCH_CAP_SSB_NO) && ++ if (!cpu_matches(cpu_vuln_whitelist, NO_SSB) && ++ !(ia32_cap & ARCH_CAP_SSB_NO) && + !cpu_has(c, X86_FEATURE_AMD_SSB_NO)) + setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS); + + if (ia32_cap & ARCH_CAP_IBRS_ALL) + setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED); + +- if (!cpu_matches(NO_MDS) && !(ia32_cap & ARCH_CAP_MDS_NO)) { ++ if (!cpu_matches(cpu_vuln_whitelist, NO_MDS) && ++ !(ia32_cap & ARCH_CAP_MDS_NO)) { + setup_force_cpu_bug(X86_BUG_MDS); +- if (cpu_matches(MSBDS_ONLY)) ++ if 
(cpu_matches(cpu_vuln_whitelist, MSBDS_ONLY)) + setup_force_cpu_bug(X86_BUG_MSBDS_ONLY); + } + +- if (!cpu_matches(NO_SWAPGS)) ++ if (!cpu_matches(cpu_vuln_whitelist, NO_SWAPGS)) + setup_force_cpu_bug(X86_BUG_SWAPGS); + + /* +@@ -934,7 +937,7 @@ static void __init cpu_set_bug_bits(stru + (ia32_cap & ARCH_CAP_TSX_CTRL_MSR))) + setup_force_cpu_bug(X86_BUG_TAA); + +- if (cpu_matches(NO_MELTDOWN)) ++ if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN)) + return; + + /* Rogue Data Cache Load? No! */ +@@ -943,7 +946,7 @@ static void __init cpu_set_bug_bits(stru + + setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN); + +- if (cpu_matches(NO_L1TF)) ++ if (cpu_matches(cpu_vuln_whitelist, NO_L1TF)) + return; + + setup_force_cpu_bug(X86_BUG_L1TF); diff --git a/queue-3.16/x86-cpu-rename-cpu_data.x86_mask-to-cpu_data.x86_stepping.patch b/queue-3.16/x86-cpu-rename-cpu_data.x86_mask-to-cpu_data.x86_stepping.patch new file mode 100644 index 00000000..b9a91207 --- /dev/null +++ b/queue-3.16/x86-cpu-rename-cpu_data.x86_mask-to-cpu_data.x86_stepping.patch @@ -0,0 +1,697 @@ +From: Jia Zhang <qianyue.zj@alibaba-inc.com> +Date: Mon, 1 Jan 2018 09:52:10 +0800 +Subject: x86/cpu: Rename cpu_data.x86_mask to cpu_data.x86_stepping + +commit b399151cb48db30ad1e0e93dd40d68c6d007b637 upstream. + +x86_mask is a confusing name which is hard to associate with the +processor's stepping. + +Additionally, correct an indent issue in lib/cpu.c. + +Signed-off-by: Jia Zhang <qianyue.zj@alibaba-inc.com> +[ Updated it to more recent kernels. 
] +Cc: Linus Torvalds <torvalds@linux-foundation.org> +Cc: Peter Zijlstra <peterz@infradead.org> +Cc: Thomas Gleixner <tglx@linutronix.de> +Cc: bp@alien8.de +Cc: tony.luck@intel.com +Link: http://lkml.kernel.org/r/1514771530-70829-1-git-send-email-qianyue.zj@alibaba-inc.com +Signed-off-by: Ingo Molnar <mingo@kernel.org> +Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> +[bwh: Backported to 3.16: + - Drop changes in arch/x86/lib/cpu.c + - Adjust filenames, context] +Signed-off-by: Ben Hutchings <ben@decadent.org.uk> +--- + arch/x86/include/asm/acpi.h | 2 +- + arch/x86/include/asm/processor.h | 2 +- + arch/x86/kernel/amd_nb.c | 2 +- + arch/x86/kernel/asm-offsets_32.c | 2 +- + arch/x86/kernel/cpu/amd.c | 28 +++++++++++----------- + arch/x86/kernel/cpu/centaur.c | 4 ++-- + arch/x86/kernel/cpu/common.c | 8 +++---- + arch/x86/kernel/cpu/cyrix.c | 2 +- + arch/x86/kernel/cpu/intel.c | 18 +++++++------- + arch/x86/kernel/cpu/microcode/intel.c | 4 ++-- + arch/x86/kernel/cpu/mtrr/generic.c | 2 +- + arch/x86/kernel/cpu/mtrr/main.c | 4 ++-- + arch/x86/kernel/cpu/perf_event_intel.c | 2 +- + arch/x86/kernel/cpu/perf_event_intel_lbr.c | 2 +- + arch/x86/kernel/cpu/perf_event_p6.c | 2 +- + arch/x86/kernel/cpu/proc.c | 4 ++-- + arch/x86/kernel/head_32.S | 4 ++-- + arch/x86/kernel/mpparse.c | 2 +- + drivers/char/hw_random/via-rng.c | 2 +- + drivers/cpufreq/acpi-cpufreq.c | 2 +- + drivers/cpufreq/longhaul.c | 6 ++--- + drivers/cpufreq/p4-clockmod.c | 2 +- + drivers/cpufreq/powernow-k7.c | 2 +- + drivers/cpufreq/speedstep-centrino.c | 4 ++-- + drivers/cpufreq/speedstep-lib.c | 6 ++--- + drivers/crypto/padlock-aes.c | 2 +- + drivers/edac/amd64_edac.c | 2 +- + drivers/edac/mce_amd.c | 2 +- + drivers/hwmon/coretemp.c | 6 ++--- + drivers/hwmon/hwmon-vid.c | 2 +- + drivers/hwmon/k10temp.c | 2 +- + drivers/hwmon/k8temp.c | 2 +- + drivers/video/fbdev/geode/video_gx.c | 2 +- + 33 files changed, 69 insertions(+), 69 deletions(-) + +--- a/arch/x86/include/asm/acpi.h ++++ 
b/arch/x86/include/asm/acpi.h +@@ -87,7 +87,7 @@ static inline unsigned int acpi_processo + if (boot_cpu_data.x86 == 0x0F && + boot_cpu_data.x86_vendor == X86_VENDOR_AMD && + boot_cpu_data.x86_model <= 0x05 && +- boot_cpu_data.x86_mask < 0x0A) ++ boot_cpu_data.x86_stepping < 0x0A) + return 1; + else if (amd_e400_c1e_detected) + return 1; +--- a/arch/x86/include/asm/processor.h ++++ b/arch/x86/include/asm/processor.h +@@ -82,7 +82,7 @@ struct cpuinfo_x86 { + __u8 x86; /* CPU family */ + __u8 x86_vendor; /* CPU vendor */ + __u8 x86_model; +- __u8 x86_mask; ++ __u8 x86_stepping; + #ifdef CONFIG_X86_32 + char wp_works_ok; /* It doesn't on 386's */ + +--- a/arch/x86/kernel/amd_nb.c ++++ b/arch/x86/kernel/amd_nb.c +@@ -116,7 +116,7 @@ int amd_cache_northbridges(void) + if (boot_cpu_data.x86 == 0x10 && + boot_cpu_data.x86_model >= 0x8 && + (boot_cpu_data.x86_model > 0x9 || +- boot_cpu_data.x86_mask >= 0x1)) ++ boot_cpu_data.x86_stepping >= 0x1)) + amd_northbridges.flags |= AMD_NB_L3_INDEX_DISABLE; + + if (boot_cpu_data.x86 == 0x15) +--- a/arch/x86/kernel/asm-offsets_32.c ++++ b/arch/x86/kernel/asm-offsets_32.c +@@ -27,7 +27,7 @@ void foo(void) + OFFSET(CPUINFO_x86, cpuinfo_x86, x86); + OFFSET(CPUINFO_x86_vendor, cpuinfo_x86, x86_vendor); + OFFSET(CPUINFO_x86_model, cpuinfo_x86, x86_model); +- OFFSET(CPUINFO_x86_mask, cpuinfo_x86, x86_mask); ++ OFFSET(CPUINFO_x86_stepping, cpuinfo_x86, x86_stepping); + OFFSET(CPUINFO_cpuid_level, cpuinfo_x86, cpuid_level); + OFFSET(CPUINFO_x86_capability, cpuinfo_x86, x86_capability); + OFFSET(CPUINFO_x86_vendor_id, cpuinfo_x86, x86_vendor_id); +--- a/arch/x86/kernel/cpu/amd.c ++++ b/arch/x86/kernel/cpu/amd.c +@@ -101,7 +101,7 @@ static void init_amd_k6(struct cpuinfo_x + return; + } + +- if (c->x86_model == 6 && c->x86_mask == 1) { ++ if (c->x86_model == 6 && c->x86_stepping == 1) { + const int K6_BUG_LOOP = 1000000; + int n; + void (*f_vide)(void); +@@ -131,7 +131,7 @@ static void init_amd_k6(struct cpuinfo_x + + /* K6 with old style 
WHCR */ + if (c->x86_model < 8 || +- (c->x86_model == 8 && c->x86_mask < 8)) { ++ (c->x86_model == 8 && c->x86_stepping < 8)) { + /* We can only write allocate on the low 508Mb */ + if (mbytes > 508) + mbytes = 508; +@@ -150,7 +150,7 @@ static void init_amd_k6(struct cpuinfo_x + return; + } + +- if ((c->x86_model == 8 && c->x86_mask > 7) || ++ if ((c->x86_model == 8 && c->x86_stepping > 7) || + c->x86_model == 9 || c->x86_model == 13) { + /* The more serious chips .. */ + +@@ -190,12 +190,12 @@ static void amd_k7_smp_check(struct cpui + * but they are not certified as MP capable. + */ + /* Athlon 660/661 is valid. */ +- if ((c->x86_model == 6) && ((c->x86_mask == 0) || +- (c->x86_mask == 1))) ++ if ((c->x86_model == 6) && ((c->x86_stepping == 0) || ++ (c->x86_stepping == 1))) + return; + + /* Duron 670 is valid */ +- if ((c->x86_model == 7) && (c->x86_mask == 0)) ++ if ((c->x86_model == 7) && (c->x86_stepping == 0)) + return; + + /* +@@ -205,8 +205,8 @@ static void amd_k7_smp_check(struct cpui + * See http://www.heise.de/newsticker/data/jow-18.10.01-000 for + * more. 
+ */ +- if (((c->x86_model == 6) && (c->x86_mask >= 2)) || +- ((c->x86_model == 7) && (c->x86_mask >= 1)) || ++ if (((c->x86_model == 6) && (c->x86_stepping >= 2)) || ++ ((c->x86_model == 7) && (c->x86_stepping >= 1)) || + (c->x86_model > 7)) + if (cpu_has_mp) + return; +@@ -244,7 +244,7 @@ static void init_amd_k7(struct cpuinfo_x + * are more robust with CLK_CTL set to 200xxxxx instead of 600xxxxx + * As per AMD technical note 27212 0.2 + */ +- if ((c->x86_model == 8 && c->x86_mask >= 1) || (c->x86_model > 8)) { ++ if ((c->x86_model == 8 && c->x86_stepping >= 1) || (c->x86_model > 8)) { + rdmsr(MSR_K7_CLK_CTL, l, h); + if ((l & 0xfff00000) != 0x20000000) { + printk(KERN_INFO +@@ -514,7 +514,7 @@ static void early_init_amd(struct cpuinf + /* Set MTRR capability flag if appropriate */ + if (c->x86 == 5) + if (c->x86_model == 13 || c->x86_model == 9 || +- (c->x86_model == 8 && c->x86_mask >= 8)) ++ (c->x86_model == 8 && c->x86_stepping >= 8)) + set_cpu_cap(c, X86_FEATURE_K6_MTRR); + #endif + #if defined(CONFIG_X86_LOCAL_APIC) && defined(CONFIG_PCI) +@@ -561,7 +561,7 @@ static void init_amd_zn(struct cpuinfo_x + * Fix erratum 1076: CPB feature bit not being set in CPUID. It affects + * all up to and including B1. 
+ */ +- if (c->x86_model <= 1 && c->x86_mask <= 1) ++ if (c->x86_model <= 1 && c->x86_stepping <= 1) + set_cpu_cap(c, X86_FEATURE_CPB); + } + +@@ -810,11 +810,11 @@ static unsigned int amd_size_cache(struc + /* AMD errata T13 (order #21922) */ + if ((c->x86 == 6)) { + /* Duron Rev A0 */ +- if (c->x86_model == 3 && c->x86_mask == 0) ++ if (c->x86_model == 3 && c->x86_stepping == 0) + size = 64; + /* Tbird rev A1/A2 */ + if (c->x86_model == 4 && +- (c->x86_mask == 0 || c->x86_mask == 1)) ++ (c->x86_stepping == 0 || c->x86_stepping == 1)) + size = 256; + } + return size; +@@ -951,7 +951,7 @@ static bool cpu_has_amd_erratum(struct c + } + + /* OSVW unavailable or ID unknown, match family-model-stepping range */ +- ms = (cpu->x86_model << 4) | cpu->x86_mask; ++ ms = (cpu->x86_model << 4) | cpu->x86_stepping; + while ((range = *erratum++)) + if ((cpu->x86 == AMD_MODEL_RANGE_FAMILY(range)) && + (ms >= AMD_MODEL_RANGE_START(range)) && +--- a/arch/x86/kernel/cpu/centaur.c ++++ b/arch/x86/kernel/cpu/centaur.c +@@ -134,7 +134,7 @@ static void init_centaur(struct cpuinfo_ + clear_cpu_cap(c, X86_FEATURE_TSC); + break; + case 8: +- switch (c->x86_mask) { ++ switch (c->x86_stepping) { + default: + name = "2"; + break; +@@ -209,7 +209,7 @@ centaur_size_cache(struct cpuinfo_x86 *c + * - Note, it seems this may only be in engineering samples. 
+ */ + if ((c->x86 == 6) && (c->x86_model == 9) && +- (c->x86_mask == 1) && (size == 65)) ++ (c->x86_stepping == 1) && (size == 65)) + size -= 1; + return size; + } +--- a/arch/x86/kernel/cpu/common.c ++++ b/arch/x86/kernel/cpu/common.c +@@ -659,7 +659,7 @@ void cpu_detect(struct cpuinfo_x86 *c) + cpuid(0x00000001, &tfms, &misc, &junk, &cap0); + c->x86 = (tfms >> 8) & 0xf; + c->x86_model = (tfms >> 4) & 0xf; +- c->x86_mask = tfms & 0xf; ++ c->x86_stepping = tfms & 0xf; + + if (c->x86 == 0xf) + c->x86 += (tfms >> 20) & 0xff; +@@ -1095,7 +1095,7 @@ static void identify_cpu(struct cpuinfo_ + c->loops_per_jiffy = loops_per_jiffy; + c->x86_cache_size = 0; + c->x86_vendor = X86_VENDOR_UNKNOWN; +- c->x86_model = c->x86_mask = 0; /* So far unknown... */ ++ c->x86_model = c->x86_stepping = 0; /* So far unknown... */ + c->x86_vendor_id[0] = '\0'; /* Unset */ + c->x86_model_id[0] = '\0'; /* Unset */ + c->x86_max_cores = 1; +@@ -1356,8 +1356,8 @@ void print_cpu_info(struct cpuinfo_x86 * + + printk(KERN_CONT " (fam: %02x, model: %02x", c->x86, c->x86_model); + +- if (c->x86_mask || c->cpuid_level >= 0) +- printk(KERN_CONT ", stepping: %02x)\n", c->x86_mask); ++ if (c->x86_stepping || c->cpuid_level >= 0) ++ printk(KERN_CONT ", stepping: %02x)\n", c->x86_stepping); + else + printk(KERN_CONT ")\n"); + +--- a/arch/x86/kernel/cpu/cyrix.c ++++ b/arch/x86/kernel/cpu/cyrix.c +@@ -212,7 +212,7 @@ static void init_cyrix(struct cpuinfo_x8 + + /* common case step number/rev -- exceptions handled below */ + c->x86_model = (dir1 >> 4) + 1; +- c->x86_mask = dir1 & 0xf; ++ c->x86_stepping = dir1 & 0xf; + + /* Now cook; the original recipe is by Channing Corn, from Cyrix. 
+ * We do the same thing for each generation: we work out +--- a/arch/x86/kernel/cpu/intel.c ++++ b/arch/x86/kernel/cpu/intel.c +@@ -81,7 +81,7 @@ static bool bad_spectre_microcode(struct + + for (i = 0; i < ARRAY_SIZE(spectre_bad_microcodes); i++) { + if (c->x86_model == spectre_bad_microcodes[i].model && +- c->x86_mask == spectre_bad_microcodes[i].stepping) ++ c->x86_stepping == spectre_bad_microcodes[i].stepping) + return (c->microcode <= spectre_bad_microcodes[i].microcode); + } + return false; +@@ -131,7 +131,7 @@ static void early_init_intel(struct cpui + * need the microcode to have already been loaded... so if it is + * not, recommend a BIOS update and disable large pages. + */ +- if (c->x86 == 6 && c->x86_model == 0x1c && c->x86_mask <= 2 && ++ if (c->x86 == 6 && c->x86_model == 0x1c && c->x86_stepping <= 2 && + c->microcode < 0x20e) { + printk(KERN_WARNING "Atom PSE erratum detected, BIOS microcode update recommended\n"); + clear_cpu_cap(c, X86_FEATURE_PSE); +@@ -147,7 +147,7 @@ static void early_init_intel(struct cpui + + /* CPUID workaround for 0F33/0F34 CPU */ + if (c->x86 == 0xF && c->x86_model == 0x3 +- && (c->x86_mask == 0x3 || c->x86_mask == 0x4)) ++ && (c->x86_stepping == 0x3 || c->x86_stepping == 0x4)) + c->x86_phys_bits = 36; + + /* +@@ -246,7 +246,7 @@ int ppro_with_ram_bug(void) + if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL && + boot_cpu_data.x86 == 6 && + boot_cpu_data.x86_model == 1 && +- boot_cpu_data.x86_mask < 8) { ++ boot_cpu_data.x86_stepping < 8) { + printk(KERN_INFO "Pentium Pro with Errata#50 detected. 
Taking evasive action.\n"); + return 1; + } +@@ -263,7 +263,7 @@ static void intel_smp_check(struct cpuin + * Mask B, Pentium, but not Pentium MMX + */ + if (c->x86 == 5 && +- c->x86_mask >= 1 && c->x86_mask <= 4 && ++ c->x86_stepping >= 1 && c->x86_stepping <= 4 && + c->x86_model <= 3) { + /* + * Remember we have B step Pentia with bugs +@@ -305,7 +305,7 @@ static void intel_workarounds(struct cpu + * SEP CPUID bug: Pentium Pro reports SEP but doesn't have it until + * model 3 mask 3 + */ +- if ((c->x86<<8 | c->x86_model<<4 | c->x86_mask) < 0x633) ++ if ((c->x86<<8 | c->x86_model<<4 | c->x86_stepping) < 0x633) + clear_cpu_cap(c, X86_FEATURE_SEP); + + /* +@@ -323,7 +323,7 @@ static void intel_workarounds(struct cpu + * P4 Xeon errata 037 workaround. + * Hardware prefetcher may cause stale data to be loaded into the cache. + */ +- if ((c->x86 == 15) && (c->x86_model == 1) && (c->x86_mask == 1)) { ++ if ((c->x86 == 15) && (c->x86_model == 1) && (c->x86_stepping == 1)) { + if (msr_set_bit(MSR_IA32_MISC_ENABLE, + MSR_IA32_MISC_ENABLE_PREFETCH_DISABLE_BIT) + > 0) { +@@ -339,7 +339,7 @@ static void intel_workarounds(struct cpu + * Specification Update"). 
+ */ + if (cpu_has_apic && (c->x86<<8 | c->x86_model<<4) == 0x520 && +- (c->x86_mask < 0x6 || c->x86_mask == 0xb)) ++ (c->x86_stepping < 0x6 || c->x86_stepping == 0xb)) + set_cpu_cap(c, X86_FEATURE_11AP); + + +@@ -523,7 +523,7 @@ static void init_intel(struct cpuinfo_x8 + case 6: + if (l2 == 128) + p = "Celeron (Mendocino)"; +- else if (c->x86_mask == 0 || c->x86_mask == 5) ++ else if (c->x86_stepping == 0 || c->x86_stepping == 5) + p = "Celeron-A"; + break; + +--- a/arch/x86/kernel/cpu/microcode/intel.c ++++ b/arch/x86/kernel/cpu/microcode/intel.c +@@ -291,7 +291,7 @@ static bool is_blacklisted(unsigned int + */ + if (c->x86 == 6 && + c->x86_model == 0x4F && +- c->x86_mask == 0x01 && ++ c->x86_stepping == 0x01 && + llc_size_per_core > 2621440 && + c->microcode < 0x0b000021) { + pr_err_once("Erratum BDF90: late loading with revision < 0x0b000021 (0x%x) disabled.\n", c->microcode); +@@ -314,7 +314,7 @@ static enum ucode_state request_microcod + return UCODE_NFOUND; + + sprintf(name, "intel-ucode/%02x-%02x-%02x", +- c->x86, c->x86_model, c->x86_mask); ++ c->x86, c->x86_model, c->x86_stepping); + + if (request_firmware_direct(&firmware, name, device)) { + pr_debug("data file %s load failed\n", name); +--- a/arch/x86/kernel/cpu/mtrr/generic.c ++++ b/arch/x86/kernel/cpu/mtrr/generic.c +@@ -791,7 +791,7 @@ int generic_validate_add_page(unsigned l + */ + if (is_cpu(INTEL) && boot_cpu_data.x86 == 6 && + boot_cpu_data.x86_model == 1 && +- boot_cpu_data.x86_mask <= 7) { ++ boot_cpu_data.x86_stepping <= 7) { + if (base & ((1 << (22 - PAGE_SHIFT)) - 1)) { + pr_warning("mtrr: base(0x%lx000) is not 4 MiB aligned\n", base); + return -EINVAL; +--- a/arch/x86/kernel/cpu/mtrr/main.c ++++ b/arch/x86/kernel/cpu/mtrr/main.c +@@ -688,8 +688,8 @@ void __init mtrr_bp_init(void) + if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL && + boot_cpu_data.x86 == 0xF && + boot_cpu_data.x86_model == 0x3 && +- (boot_cpu_data.x86_mask == 0x3 || +- boot_cpu_data.x86_mask == 0x4)) ++ 
(boot_cpu_data.x86_stepping == 0x3 || ++ boot_cpu_data.x86_stepping == 0x4)) + phys_addr = 36; + + size_or_mask = SIZE_OR_MASK_BITS(phys_addr); +--- a/arch/x86/kernel/cpu/perf_event_intel.c ++++ b/arch/x86/kernel/cpu/perf_event_intel.c +@@ -2192,7 +2192,7 @@ static int intel_snb_pebs_broken(int cpu + break; + + case 45: /* SNB-EP */ +- switch (cpu_data(cpu).x86_mask) { ++ switch (cpu_data(cpu).x86_stepping) { + case 6: rev = 0x618; break; + case 7: rev = 0x70c; break; + } +--- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c ++++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c +@@ -763,7 +763,7 @@ void intel_pmu_lbr_init_atom(void) + * on PMU interrupt + */ + if (boot_cpu_data.x86_model == 28 +- && boot_cpu_data.x86_mask < 10) { ++ && boot_cpu_data.x86_stepping < 10) { + pr_cont("LBR disabled due to erratum"); + return; + } +--- a/arch/x86/kernel/cpu/perf_event_p6.c ++++ b/arch/x86/kernel/cpu/perf_event_p6.c +@@ -233,7 +233,7 @@ static __initconst const struct x86_pmu + + static __init void p6_pmu_rdpmc_quirk(void) + { +- if (boot_cpu_data.x86_mask < 9) { ++ if (boot_cpu_data.x86_stepping < 9) { + /* + * PPro erratum 26; fixed in stepping 9 and above. + */ +--- a/arch/x86/kernel/cpu/proc.c ++++ b/arch/x86/kernel/cpu/proc.c +@@ -69,8 +69,8 @@ static int show_cpuinfo(struct seq_file + c->x86_model, + c->x86_model_id[0] ? 
c->x86_model_id : "unknown"); + +- if (c->x86_mask || c->cpuid_level >= 0) +- seq_printf(m, "stepping\t: %d\n", c->x86_mask); ++ if (c->x86_stepping || c->cpuid_level >= 0) ++ seq_printf(m, "stepping\t: %d\n", c->x86_stepping); + else + seq_printf(m, "stepping\t: unknown\n"); + if (c->microcode) +--- a/arch/x86/kernel/head_32.S ++++ b/arch/x86/kernel/head_32.S +@@ -33,7 +33,7 @@ + #define X86 new_cpu_data+CPUINFO_x86 + #define X86_VENDOR new_cpu_data+CPUINFO_x86_vendor + #define X86_MODEL new_cpu_data+CPUINFO_x86_model +-#define X86_MASK new_cpu_data+CPUINFO_x86_mask ++#define X86_STEPPING new_cpu_data+CPUINFO_x86_stepping + #define X86_HARD_MATH new_cpu_data+CPUINFO_hard_math + #define X86_CPUID new_cpu_data+CPUINFO_cpuid_level + #define X86_CAPABILITY new_cpu_data+CPUINFO_x86_capability +@@ -433,7 +433,7 @@ enable_paging: + shrb $4,%al + movb %al,X86_MODEL + andb $0x0f,%cl # mask mask revision +- movb %cl,X86_MASK ++ movb %cl,X86_STEPPING + movl %edx,X86_CAPABILITY + + is486: +--- a/arch/x86/kernel/mpparse.c ++++ b/arch/x86/kernel/mpparse.c +@@ -409,7 +409,7 @@ static inline void __init construct_defa + processor.apicver = mpc_default_type > 4 ? 0x10 : 0x01; + processor.cpuflag = CPU_ENABLED; + processor.cpufeature = (boot_cpu_data.x86 << 8) | +- (boot_cpu_data.x86_model << 4) | boot_cpu_data.x86_mask; ++ (boot_cpu_data.x86_model << 4) | boot_cpu_data.x86_stepping; + processor.featureflag = boot_cpu_data.x86_capability[0]; + processor.reserved[0] = 0; + processor.reserved[1] = 0; +--- a/drivers/char/hw_random/via-rng.c ++++ b/drivers/char/hw_random/via-rng.c +@@ -166,7 +166,7 @@ static int via_rng_init(struct hwrng *rn + /* Enable secondary noise source on CPUs where it is present. 
*/ + + /* Nehemiah stepping 8 and higher */ +- if ((c->x86_model == 9) && (c->x86_mask > 7)) ++ if ((c->x86_model == 9) && (c->x86_stepping > 7)) + lo |= VIA_NOISESRC2; + + /* Esther */ +--- a/drivers/cpufreq/acpi-cpufreq.c ++++ b/drivers/cpufreq/acpi-cpufreq.c +@@ -628,7 +628,7 @@ static int acpi_cpufreq_blacklist(struct + if (c->x86_vendor == X86_VENDOR_INTEL) { + if ((c->x86 == 15) && + (c->x86_model == 6) && +- (c->x86_mask == 8)) { ++ (c->x86_stepping == 8)) { + printk(KERN_INFO "acpi-cpufreq: Intel(R) " + "Xeon(R) 7100 Errata AL30, processors may " + "lock up on frequency changes: disabling " +--- a/drivers/cpufreq/longhaul.c ++++ b/drivers/cpufreq/longhaul.c +@@ -786,7 +786,7 @@ static int longhaul_cpu_init(struct cpuf + break; + + case 7: +- switch (c->x86_mask) { ++ switch (c->x86_stepping) { + case 0: + longhaul_version = TYPE_LONGHAUL_V1; + cpu_model = CPU_SAMUEL2; +@@ -798,7 +798,7 @@ static int longhaul_cpu_init(struct cpuf + break; + case 1 ... 15: + longhaul_version = TYPE_LONGHAUL_V2; +- if (c->x86_mask < 8) { ++ if (c->x86_stepping < 8) { + cpu_model = CPU_SAMUEL2; + cpuname = "C3 'Samuel 2' [C5B]"; + } else { +@@ -825,7 +825,7 @@ static int longhaul_cpu_init(struct cpuf + numscales = 32; + memcpy(mults, nehemiah_mults, sizeof(nehemiah_mults)); + memcpy(eblcr, nehemiah_eblcr, sizeof(nehemiah_eblcr)); +- switch (c->x86_mask) { ++ switch (c->x86_stepping) { + case 0 ... 
1: + cpu_model = CPU_NEHEMIAH; + cpuname = "C3 'Nehemiah A' [C5XLOE]"; +--- a/drivers/cpufreq/p4-clockmod.c ++++ b/drivers/cpufreq/p4-clockmod.c +@@ -176,7 +176,7 @@ static int cpufreq_p4_cpu_init(struct cp + #endif + + /* Errata workaround */ +- cpuid = (c->x86 << 8) | (c->x86_model << 4) | c->x86_mask; ++ cpuid = (c->x86 << 8) | (c->x86_model << 4) | c->x86_stepping; + switch (cpuid) { + case 0x0f07: + case 0x0f0a: +--- a/drivers/cpufreq/powernow-k7.c ++++ b/drivers/cpufreq/powernow-k7.c +@@ -133,7 +133,7 @@ static int check_powernow(void) + return 0; + } + +- if ((c->x86_model == 6) && (c->x86_mask == 0)) { ++ if ((c->x86_model == 6) && (c->x86_stepping == 0)) { + printk(KERN_INFO PFX "K7 660[A0] core detected, " + "enabling errata workarounds\n"); + have_a0 = 1; +--- a/drivers/cpufreq/speedstep-centrino.c ++++ b/drivers/cpufreq/speedstep-centrino.c +@@ -36,7 +36,7 @@ struct cpu_id + { + __u8 x86; /* CPU family */ + __u8 x86_model; /* model */ +- __u8 x86_mask; /* stepping */ ++ __u8 x86_stepping; /* stepping */ + }; + + enum { +@@ -276,7 +276,7 @@ static int centrino_verify_cpu_id(const + { + if ((c->x86 == x->x86) && + (c->x86_model == x->x86_model) && +- (c->x86_mask == x->x86_mask)) ++ (c->x86_stepping == x->x86_stepping)) + return 1; + return 0; + } +--- a/drivers/cpufreq/speedstep-lib.c ++++ b/drivers/cpufreq/speedstep-lib.c +@@ -270,9 +270,9 @@ unsigned int speedstep_detect_processor( + ebx = cpuid_ebx(0x00000001); + ebx &= 0x000000FF; + +- pr_debug("ebx value is %x, x86_mask is %x\n", ebx, c->x86_mask); ++ pr_debug("ebx value is %x, x86_stepping is %x\n", ebx, c->x86_stepping); + +- switch (c->x86_mask) { ++ switch (c->x86_stepping) { + case 4: + /* + * B-stepping [M-P4-M] +@@ -359,7 +359,7 @@ unsigned int speedstep_detect_processor( + msr_lo, msr_hi); + if ((msr_hi & (1<<18)) && + (relaxed_check ? 
1 : (msr_hi & (3<<24)))) { +- if (c->x86_mask == 0x01) { ++ if (c->x86_stepping == 0x01) { + pr_debug("early PIII version\n"); + return SPEEDSTEP_CPU_PIII_C_EARLY; + } else +--- a/drivers/crypto/padlock-aes.c ++++ b/drivers/crypto/padlock-aes.c +@@ -535,7 +535,7 @@ static int __init padlock_init(void) + + printk(KERN_NOTICE PFX "Using VIA PadLock ACE for AES algorithm.\n"); + +- if (c->x86 == 6 && c->x86_model == 15 && c->x86_mask == 2) { ++ if (c->x86 == 6 && c->x86_model == 15 && c->x86_stepping == 2) { + ecb_fetch_blocks = MAX_ECB_FETCH_BLOCKS; + cbc_fetch_blocks = MAX_CBC_FETCH_BLOCKS; + printk(KERN_NOTICE PFX "VIA Nano stepping 2 detected: enabling workaround.\n"); +--- a/drivers/edac/amd64_edac.c ++++ b/drivers/edac/amd64_edac.c +@@ -2576,7 +2576,7 @@ static struct amd64_family_type *per_fam + struct amd64_family_type *fam_type = NULL; + + pvt->ext_model = boot_cpu_data.x86_model >> 4; +- pvt->stepping = boot_cpu_data.x86_mask; ++ pvt->stepping = boot_cpu_data.x86_stepping; + pvt->model = boot_cpu_data.x86_model; + pvt->fam = boot_cpu_data.x86; + +--- a/drivers/edac/mce_amd.c ++++ b/drivers/edac/mce_amd.c +@@ -744,7 +744,7 @@ int amd_decode_mce(struct notifier_block + + pr_emerg(HW_ERR "CPU:%d (%x:%x:%x) MC%d_STATUS[%s|%s|%s|%s|%s", + m->extcpu, +- c->x86, c->x86_model, c->x86_mask, ++ c->x86, c->x86_model, c->x86_stepping, + m->bank, + ((m->status & MCI_STATUS_OVER) ? "Over" : "-"), + ((m->status & MCI_STATUS_UC) ? 
"UE" : "CE"), +--- a/drivers/hwmon/coretemp.c ++++ b/drivers/hwmon/coretemp.c +@@ -268,13 +268,13 @@ static int adjust_tjmax(struct cpuinfo_x + for (i = 0; i < ARRAY_SIZE(tjmax_model_table); i++) { + const struct tjmax_model *tm = &tjmax_model_table[i]; + if (c->x86_model == tm->model && +- (tm->mask == ANY || c->x86_mask == tm->mask)) ++ (tm->mask == ANY || c->x86_stepping == tm->mask)) + return tm->tjmax; + } + + /* Early chips have no MSR for TjMax */ + +- if (c->x86_model == 0xf && c->x86_mask < 4) ++ if (c->x86_model == 0xf && c->x86_stepping < 4) + usemsr_ee = 0; + + if (c->x86_model > 0xe && usemsr_ee) { +@@ -426,7 +426,7 @@ static int chk_ucode_version(unsigned in + * Readings might stop update when processor visited too deep sleep, + * fixed for stepping D0 (6EC). + */ +- if (c->x86_model == 0xe && c->x86_mask < 0xc && c->microcode < 0x39) { ++ if (c->x86_model == 0xe && c->x86_stepping < 0xc && c->microcode < 0x39) { + pr_err("Errata AE18 not fixed, update BIOS or microcode of the CPU!\n"); + return -ENODEV; + } +--- a/drivers/hwmon/hwmon-vid.c ++++ b/drivers/hwmon/hwmon-vid.c +@@ -293,7 +293,7 @@ u8 vid_which_vrm(void) + if (c->x86 < 6) /* Any CPU with family lower than 6 */ + return 0; /* doesn't have VID */ + +- vrm_ret = find_vrm(c->x86, c->x86_model, c->x86_mask, c->x86_vendor); ++ vrm_ret = find_vrm(c->x86, c->x86_model, c->x86_stepping, c->x86_vendor); + if (vrm_ret == 134) + vrm_ret = get_via_model_d_vrm(); + if (vrm_ret == 0) +--- a/drivers/hwmon/k10temp.c ++++ b/drivers/hwmon/k10temp.c +@@ -126,7 +126,7 @@ static bool has_erratum_319(struct pci_d + * and AM3 formats, but that's the best we can do. 
+ */ + return boot_cpu_data.x86_model < 4 || +- (boot_cpu_data.x86_model == 4 && boot_cpu_data.x86_mask <= 2); ++ (boot_cpu_data.x86_model == 4 && boot_cpu_data.x86_stepping <= 2); + } + + static int k10temp_probe(struct pci_dev *pdev, +--- a/drivers/hwmon/k8temp.c ++++ b/drivers/hwmon/k8temp.c +@@ -187,7 +187,7 @@ static int k8temp_probe(struct pci_dev * + return -ENOMEM; + + model = boot_cpu_data.x86_model; +- stepping = boot_cpu_data.x86_mask; ++ stepping = boot_cpu_data.x86_stepping; + + /* feature available since SH-C0, exclude older revisions */ + if ((model == 4 && stepping == 0) || +--- a/drivers/video/fbdev/geode/video_gx.c ++++ b/drivers/video/fbdev/geode/video_gx.c +@@ -127,7 +127,7 @@ void gx_set_dclk_frequency(struct fb_inf + int timeout = 1000; + + /* Rev. 1 Geode GXs use a 14 MHz reference clock instead of 48 MHz. */ +- if (cpu_data(0).x86_mask == 1) { ++ if (cpu_data(0).x86_stepping == 1) { + pll_table = gx_pll_table_14MHz; + pll_table_len = ARRAY_SIZE(gx_pll_table_14MHz); + } else { diff --git a/queue-3.16/x86-speculation-add-ivy-bridge-to-affected-list.patch b/queue-3.16/x86-speculation-add-ivy-bridge-to-affected-list.patch new file mode 100644 index 00000000..8c9e00e9 --- /dev/null +++ b/queue-3.16/x86-speculation-add-ivy-bridge-to-affected-list.patch @@ -0,0 +1,38 @@ +From: Josh Poimboeuf <jpoimboe@redhat.com> +Date: Mon, 27 Apr 2020 20:46:13 +0200 +Subject: x86/speculation: Add Ivy Bridge to affected list + +commit 3798cc4d106e91382bfe016caa2edada27c2bb3f upstream. + +Make the docs match the code. 
+ +Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> +Signed-off-by: Thomas Gleixner <tglx@linutronix.de> +Signed-off-by: Ben Hutchings <ben@decadent.org.uk> +--- + .../hw-vuln/special-register-buffer-data-sampling.rst | 7 ++++--- + 1 file changed, 4 insertions(+), 3 deletions(-) + +--- a/Documentation/hw-vuln/special-register-buffer-data-sampling.rst ++++ b/Documentation/hw-vuln/special-register-buffer-data-sampling.rst +@@ -27,6 +27,8 @@ by software using TSX_CTRL_MSR otherwise + ============= ============ ======== + common name Family_Model Stepping + ============= ============ ======== ++ IvyBridge 06_3AH All ++ + Haswell 06_3CH All + Haswell_L 06_45H All + Haswell_G 06_46H All +@@ -37,9 +39,8 @@ by software using TSX_CTRL_MSR otherwise + Skylake_L 06_4EH All + Skylake 06_5EH All + +- Kabylake_L 06_8EH <=0xC +- +- Kabylake 06_9EH <=0xD ++ Kabylake_L 06_8EH <= 0xC ++ Kabylake 06_9EH <= 0xD + ============= ============ ======== + + Related CVEs diff --git a/queue-3.16/x86-speculation-add-special-register-buffer-data-sampling-srbds-mitigation.patch b/queue-3.16/x86-speculation-add-special-register-buffer-data-sampling-srbds-mitigation.patch new file mode 100644 index 00000000..127839d7 --- /dev/null +++ b/queue-3.16/x86-speculation-add-special-register-buffer-data-sampling-srbds-mitigation.patch @@ -0,0 +1,370 @@ +From: Mark Gross <mgross@linux.intel.com> +Date: Tue, 28 Apr 2020 16:58:20 +0200 +Subject: x86/speculation: Add Special Register Buffer Data Sampling (SRBDS) mitigation + +commit 7e5b3c267d256822407a22fdce6afdf9cd13f9fb upstream. + +SRBDS is an MDS-like speculative side channel that can leak bits from the +random number generator (RNG) across cores and threads. New microcode +serializes the processor access during the execution of RDRAND and +RDSEED. This ensures that the shared buffer is overwritten before it is +released for reuse. 
+ +While it is present on all affected CPU models, the microcode mitigation +is not needed on models that enumerate ARCH_CAPABILITIES[MDS_NO] in the +cases where TSX is not supported or has been disabled with TSX_CTRL. + +The mitigation is activated by default on affected processors and it +increases latency for RDRAND and RDSEED instructions. Among other +effects this will reduce throughput from /dev/urandom. + +* Enable administrator to configure the mitigation off when desired using + either mitigations=off or srbds=off. + +* Export vulnerability status via sysfs + +* Rename file-scoped macros to apply for non-whitelist table initializations. + + [ bp: Massage, + - s/VULNBL_INTEL_STEPPING/VULNBL_INTEL_STEPPINGS/g, + - do not read arch cap MSR a second time in tsx_fused_off() - just pass it in, + - flip check in cpu_set_bug_bits() to save an indentation level, + - reflow comments. + jpoimboe: s/Mitigated/Mitigation/ in user-visible strings + tglx: Dropped the fused off magic for now + ] + +Signed-off-by: Mark Gross <mgross@linux.intel.com> +Signed-off-by: Borislav Petkov <bp@suse.de> +Signed-off-by: Thomas Gleixner <tglx@linutronix.de> +Reviewed-by: Tony Luck <tony.luck@intel.com> +Reviewed-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com> +Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com> +Tested-by: Neelima Krishnan <neelima.krishnan@intel.com> +[bwh: Backported to 3.16: + - CPU feature words and bugs are numbered differently + - Adjust filename for <asm/msr-index.h>] +Signed-off-by: Ben Hutchings <ben@decadent.org.uk> +--- + .../ABI/testing/sysfs-devices-system-cpu | 1 + + Documentation/kernel-parameters.txt | 20 ++++ + arch/x86/include/asm/cpufeatures.h | 2 + + arch/x86/include/uapi/asm/msr-index.h | 4 + + arch/x86/kernel/cpu/bugs.c | 106 ++++++++++++++++++ + arch/x86/kernel/cpu/common.c | 31 +++++ + arch/x86/kernel/cpu/cpu.h | 1 + + drivers/base/cpu.c | 8 ++ + 8 files changed, 173 insertions(+) + +--- a/Documentation/ABI/testing/sysfs-devices-system-cpu 
++++ b/Documentation/ABI/testing/sysfs-devices-system-cpu +@@ -232,6 +232,7 @@ What: /sys/devices/system/cpu/vulnerabi + /sys/devices/system/cpu/vulnerabilities/spec_store_bypass + /sys/devices/system/cpu/vulnerabilities/l1tf + /sys/devices/system/cpu/vulnerabilities/mds ++ /sys/devices/system/cpu/vulnerabilities/srbds + /sys/devices/system/cpu/vulnerabilities/tsx_async_abort + /sys/devices/system/cpu/vulnerabilities/itlb_multihit + Date: January 2018 +--- a/Documentation/kernel-parameters.txt ++++ b/Documentation/kernel-parameters.txt +@@ -3356,6 +3356,26 @@ bytes respectively. Such letter suffixes + spia_pedr= + spia_peddr= + ++ srbds= [X86,INTEL] ++ Control the Special Register Buffer Data Sampling ++ (SRBDS) mitigation. ++ ++ Certain CPUs are vulnerable to an MDS-like ++ exploit which can leak bits from the random ++ number generator. ++ ++ By default, this issue is mitigated by ++ microcode. However, the microcode fix can cause ++ the RDRAND and RDSEED instructions to become ++ much slower. Among other effects, this will ++ result in reduced throughput from /dev/urandom. ++ ++ The microcode mitigation can be disabled with ++ the following option: ++ ++ off: Disable mitigation and remove ++ performance impact to RDRAND and RDSEED ++ + stack_guard_gap= [MM] + override the default stack gap protection. 
The value + is in page units and it defines how many pages prior +--- a/arch/x86/include/asm/cpufeatures.h ++++ b/arch/x86/include/asm/cpufeatures.h +@@ -247,6 +247,7 @@ + #define X86_FEATURE_AVX512CD ( 9*32+28) /* AVX-512 Conflict Detection */ + + /* Intel-defined CPU features, CPUID level 0x00000007:0 (EDX), word 10 */ ++#define X86_FEATURE_SRBDS_CTRL (10*32+ 9) /* "" SRBDS mitigation MSR available */ + #define X86_FEATURE_MD_CLEAR (10*32+10) /* VERW clears CPU buffers */ + #define X86_FEATURE_SPEC_CTRL (10*32+26) /* "" Speculation Control (IBRS + IBPB) */ + #define X86_FEATURE_INTEL_STIBP (10*32+27) /* "" Single Thread Indirect Branch Predictors */ +@@ -281,5 +282,6 @@ + #define X86_BUG_SWAPGS X86_BUG(12) /* CPU is affected by speculation through SWAPGS */ + #define X86_BUG_TAA X86_BUG(13) /* CPU is affected by TSX Async Abort(TAA) */ + #define X86_BUG_ITLB_MULTIHIT X86_BUG(14) /* CPU may incur MCE during certain page attribute changes */ ++#define X86_BUG_SRBDS X86_BUG(15) /* CPU may leak RNG bits if not mitigated */ + + #endif /* _ASM_X86_CPUFEATURES_H */ +--- a/arch/x86/include/uapi/asm/msr-index.h ++++ b/arch/x86/include/uapi/asm/msr-index.h +@@ -90,6 +90,10 @@ + #define TSX_CTRL_RTM_DISABLE (1UL << 0) /* Disable RTM feature */ + #define TSX_CTRL_CPUID_CLEAR (1UL << 1) /* Disable TSX enumeration */ + ++/* SRBDS support */ ++#define MSR_IA32_MCU_OPT_CTRL 0x00000123 ++#define RNGDS_MITG_DIS BIT(0) ++ + #define MSR_IA32_SYSENTER_CS 0x00000174 + #define MSR_IA32_SYSENTER_ESP 0x00000175 + #define MSR_IA32_SYSENTER_EIP 0x00000176 +--- a/arch/x86/kernel/cpu/bugs.c ++++ b/arch/x86/kernel/cpu/bugs.c +@@ -38,6 +38,7 @@ static void __init ssb_select_mitigation + static void __init l1tf_select_mitigation(void); + static void __init mds_select_mitigation(void); + static void __init taa_select_mitigation(void); ++static void __init srbds_select_mitigation(void); + + /* The base value of the SPEC_CTRL MSR that always has to be preserved. 
*/ + u64 x86_spec_ctrl_base; +@@ -159,6 +160,7 @@ void __init check_bugs(void) + l1tf_select_mitigation(); + mds_select_mitigation(); + taa_select_mitigation(); ++ srbds_select_mitigation(); + + arch_smt_update(); + +@@ -417,6 +419,97 @@ static int __init tsx_async_abort_parse_ + early_param("tsx_async_abort", tsx_async_abort_parse_cmdline); + + #undef pr_fmt ++#define pr_fmt(fmt) "SRBDS: " fmt ++ ++enum srbds_mitigations { ++ SRBDS_MITIGATION_OFF, ++ SRBDS_MITIGATION_UCODE_NEEDED, ++ SRBDS_MITIGATION_FULL, ++ SRBDS_MITIGATION_TSX_OFF, ++ SRBDS_MITIGATION_HYPERVISOR, ++}; ++ ++static enum srbds_mitigations srbds_mitigation = SRBDS_MITIGATION_FULL; ++ ++static const char * const srbds_strings[] = { ++ [SRBDS_MITIGATION_OFF] = "Vulnerable", ++ [SRBDS_MITIGATION_UCODE_NEEDED] = "Vulnerable: No microcode", ++ [SRBDS_MITIGATION_FULL] = "Mitigation: Microcode", ++ [SRBDS_MITIGATION_TSX_OFF] = "Mitigation: TSX disabled", ++ [SRBDS_MITIGATION_HYPERVISOR] = "Unknown: Dependent on hypervisor status", ++}; ++ ++static bool srbds_off; ++ ++void update_srbds_msr(void) ++{ ++ u64 mcu_ctrl; ++ ++ if (!boot_cpu_has_bug(X86_BUG_SRBDS)) ++ return; ++ ++ if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) ++ return; ++ ++ if (srbds_mitigation == SRBDS_MITIGATION_UCODE_NEEDED) ++ return; ++ ++ rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl); ++ ++ switch (srbds_mitigation) { ++ case SRBDS_MITIGATION_OFF: ++ case SRBDS_MITIGATION_TSX_OFF: ++ mcu_ctrl |= RNGDS_MITG_DIS; ++ break; ++ case SRBDS_MITIGATION_FULL: ++ mcu_ctrl &= ~RNGDS_MITG_DIS; ++ break; ++ default: ++ break; ++ } ++ ++ wrmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl); ++} ++ ++static void __init srbds_select_mitigation(void) ++{ ++ u64 ia32_cap; ++ ++ if (!boot_cpu_has_bug(X86_BUG_SRBDS)) ++ return; ++ ++ /* ++ * Check to see if this is one of the MDS_NO systems supporting ++ * TSX that are only exposed to SRBDS when TSX is enabled. 
++ */ ++ ia32_cap = x86_read_arch_cap_msr(); ++ if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM)) ++ srbds_mitigation = SRBDS_MITIGATION_TSX_OFF; ++ else if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) ++ srbds_mitigation = SRBDS_MITIGATION_HYPERVISOR; ++ else if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL)) ++ srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED; ++ else if (cpu_mitigations_off() || srbds_off) ++ srbds_mitigation = SRBDS_MITIGATION_OFF; ++ ++ update_srbds_msr(); ++ pr_info("%s\n", srbds_strings[srbds_mitigation]); ++} ++ ++static int __init srbds_parse_cmdline(char *str) ++{ ++ if (!str) ++ return -EINVAL; ++ ++ if (!boot_cpu_has_bug(X86_BUG_SRBDS)) ++ return 0; ++ ++ srbds_off = !strcmp(str, "off"); ++ return 0; ++} ++early_param("srbds", srbds_parse_cmdline); ++ ++#undef pr_fmt + #define pr_fmt(fmt) "Spectre V1 : " fmt + + enum spectre_v1_mitigation { +@@ -1422,6 +1515,11 @@ static char *ibpb_state(void) + return ""; + } + ++static ssize_t srbds_show_state(char *buf) ++{ ++ return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]); ++} ++ + static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr, + char *buf, unsigned int bug) + { +@@ -1463,6 +1561,9 @@ static ssize_t cpu_show_common(struct de + case X86_BUG_ITLB_MULTIHIT: + return itlb_multihit_show_state(buf); + ++ case X86_BUG_SRBDS: ++ return srbds_show_state(buf); ++ + default: + break; + } +@@ -1509,4 +1610,9 @@ ssize_t cpu_show_itlb_multihit(struct de + { + return cpu_show_common(dev, attr, buf, X86_BUG_ITLB_MULTIHIT); + } ++ ++ssize_t cpu_show_srbds(struct device *dev, struct device_attribute *attr, char *buf) ++{ ++ return cpu_show_common(dev, attr, buf, X86_BUG_SRBDS); ++} + #endif +--- a/arch/x86/kernel/cpu/common.c ++++ b/arch/x86/kernel/cpu/common.c +@@ -872,6 +872,27 @@ static const __initconst struct x86_cpu_ + {} + }; + ++#define VULNBL_INTEL_STEPPINGS(model, steppings, issues) \ ++ X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(INTEL, 6, \ ++ 
INTEL_FAM6_##model, steppings, \ ++ X86_FEATURE_ANY, issues) ++ ++#define SRBDS BIT(0) ++ ++static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = { ++ VULNBL_INTEL_STEPPINGS(IVYBRIDGE, X86_STEPPING_ANY, SRBDS), ++ VULNBL_INTEL_STEPPINGS(HASWELL_CORE, X86_STEPPING_ANY, SRBDS), ++ VULNBL_INTEL_STEPPINGS(HASWELL_ULT, X86_STEPPING_ANY, SRBDS), ++ VULNBL_INTEL_STEPPINGS(HASWELL_GT3E, X86_STEPPING_ANY, SRBDS), ++ VULNBL_INTEL_STEPPINGS(BROADWELL_GT3E, X86_STEPPING_ANY, SRBDS), ++ VULNBL_INTEL_STEPPINGS(BROADWELL_CORE, X86_STEPPING_ANY, SRBDS), ++ VULNBL_INTEL_STEPPINGS(SKYLAKE_MOBILE, X86_STEPPING_ANY, SRBDS), ++ VULNBL_INTEL_STEPPINGS(SKYLAKE_DESKTOP, X86_STEPPING_ANY, SRBDS), ++ VULNBL_INTEL_STEPPINGS(KABYLAKE_MOBILE, X86_STEPPINGS(0x0, 0xC), SRBDS), ++ VULNBL_INTEL_STEPPINGS(KABYLAKE_DESKTOP,X86_STEPPINGS(0x0, 0xD), SRBDS), ++ {} ++}; ++ + static bool __init cpu_matches(const struct x86_cpu_id *table, unsigned long which) + { + const struct x86_cpu_id *m = x86_match_cpu(table); +@@ -937,6 +958,15 @@ static void __init cpu_set_bug_bits(stru + (ia32_cap & ARCH_CAP_TSX_CTRL_MSR))) + setup_force_cpu_bug(X86_BUG_TAA); + ++ /* ++ * SRBDS affects CPUs which support RDRAND or RDSEED and are listed ++ * in the vulnerability blacklist. 
++ */ ++ if ((cpu_has(c, X86_FEATURE_RDRAND) || ++ cpu_has(c, X86_FEATURE_RDSEED)) && ++ cpu_matches(cpu_vuln_blacklist, SRBDS)) ++ setup_force_cpu_bug(X86_BUG_SRBDS); ++ + if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN)) + return; + +@@ -1283,6 +1313,7 @@ void identify_secondary_cpu(struct cpuin + #endif + mtrr_ap_init(); + x86_spec_ctrl_setup_ap(); ++ update_srbds_msr(); + } + + struct msr_range { +--- a/arch/x86/kernel/cpu/cpu.h ++++ b/arch/x86/kernel/cpu/cpu.h +@@ -63,6 +63,7 @@ extern void get_cpu_cap(struct cpuinfo_x + extern void cpu_detect_cache_sizes(struct cpuinfo_x86 *c); + + extern void x86_spec_ctrl_setup_ap(void); ++extern void update_srbds_msr(void); + + extern u64 x86_read_arch_cap_msr(void); + +--- a/drivers/base/cpu.c ++++ b/drivers/base/cpu.c +@@ -469,6 +469,12 @@ ssize_t __weak cpu_show_itlb_multihit(st + return sprintf(buf, "Not affected\n"); + } + ++ssize_t __weak cpu_show_srbds(struct device *dev, ++ struct device_attribute *attr, char *buf) ++{ ++ return sprintf(buf, "Not affected\n"); ++} ++ + static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL); + static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL); + static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL); +@@ -477,6 +483,7 @@ static DEVICE_ATTR(l1tf, 0444, cpu_show_ + static DEVICE_ATTR(mds, 0444, cpu_show_mds, NULL); + static DEVICE_ATTR(tsx_async_abort, 0444, cpu_show_tsx_async_abort, NULL); + static DEVICE_ATTR(itlb_multihit, 0444, cpu_show_itlb_multihit, NULL); ++static DEVICE_ATTR(srbds, 0444, cpu_show_srbds, NULL); + + static struct attribute *cpu_root_vulnerabilities_attrs[] = { + &dev_attr_meltdown.attr, +@@ -487,6 +494,7 @@ static struct attribute *cpu_root_vulner + &dev_attr_mds.attr, + &dev_attr_tsx_async_abort.attr, + &dev_attr_itlb_multihit.attr, ++ &dev_attr_srbds.attr, + NULL + }; + diff --git a/queue-3.16/x86-speculation-add-srbds-vulnerability-and-mitigation-documentation.patch 
b/queue-3.16/x86-speculation-add-srbds-vulnerability-and-mitigation-documentation.patch new file mode 100644 index 00000000..fe300715 --- /dev/null +++ b/queue-3.16/x86-speculation-add-srbds-vulnerability-and-mitigation-documentation.patch @@ -0,0 +1,172 @@ +From: Mark Gross <mgross@linux.intel.com> +Date: Tue, 28 Apr 2020 16:58:21 +0200 +Subject: x86/speculation: Add SRBDS vulnerability and mitigation documentation + +commit 7222a1b5b87417f22265c92deea76a6aecd0fb0f upstream. + +Add documentation for the SRBDS vulnerability and its mitigation. + + [ bp: Massage. + jpoimboe: sysfs table strings. ] + +Signed-off-by: Mark Gross <mgross@linux.intel.com> +Signed-off-by: Borislav Petkov <bp@suse.de> +Reviewed-by: Tony Luck <tony.luck@intel.com> +Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com> +Signed-off-by: Ben Hutchings <ben@decadent.org.uk> +--- + .../special-register-buffer-data-sampling.rst | 148 ++++++++++++++++++ + 1 file changed, 148 insertions(+) + create mode 100644 Documentation/hw-vuln/special-register-buffer-data-sampling.rst + +--- /dev/null ++++ b/Documentation/hw-vuln/special-register-buffer-data-sampling.rst +@@ -0,0 +1,148 @@ ++.. SPDX-License-Identifier: GPL-2.0 ++ ++SRBDS - Special Register Buffer Data Sampling ++============================================= ++ ++SRBDS is a hardware vulnerability that allows MDS :doc:`mds` techniques to ++infer values returned from special register accesses. Special register ++accesses are accesses to off core registers. According to Intel's evaluation, ++the special register reads that have a security expectation of privacy are ++RDRAND, RDSEED and SGX EGETKEY. ++ ++When RDRAND, RDSEED and EGETKEY instructions are used, the data is moved ++to the core through the special register mechanism that is susceptible ++to MDS attacks. ++ ++Affected processors ++-------------------- ++Core models (desktop, mobile, Xeon-E3) that implement RDRAND and/or RDSEED may ++be affected. 
++ ++A processor is affected by SRBDS if its Family_Model and stepping is ++in the following list, with the exception of the listed processors ++exporting MDS_NO while Intel TSX is available yet not enabled. The ++latter class of processors are only affected when Intel TSX is enabled ++by software using TSX_CTRL_MSR otherwise they are not affected. ++ ++ ============= ============ ======== ++ common name Family_Model Stepping ++ ============= ============ ======== ++ Haswell 06_3CH All ++ Haswell_L 06_45H All ++ Haswell_G 06_46H All ++ ++ Broadwell_G 06_47H All ++ Broadwell 06_3DH All ++ ++ Skylake_L 06_4EH All ++ Skylake 06_5EH All ++ ++ Kabylake_L 06_8EH <=0xC ++ ++ Kabylake 06_9EH <=0xD ++ ============= ============ ======== ++ ++Related CVEs ++------------ ++ ++The following CVE entry is related to this SRBDS issue: ++ ++ ============== ===== ===================================== ++ CVE-2020-0543 SRBDS Special Register Buffer Data Sampling ++ ============== ===== ===================================== ++ ++Attack scenarios ++---------------- ++An unprivileged user can extract values returned from RDRAND and RDSEED ++executed on another core or sibling thread using MDS techniques. ++ ++ ++Mitigation mechanism ++------------------- ++Intel will release microcode updates that modify the RDRAND, RDSEED, and ++EGETKEY instructions to overwrite secret special register data in the shared ++staging buffer before the secret data can be accessed by another logical ++processor. ++ ++During execution of the RDRAND, RDSEED, or EGETKEY instructions, off-core ++accesses from other logical processors will be delayed until the special ++register read is complete and the secret data in the shared staging buffer is ++overwritten. ++ ++This has three effects on performance: ++ ++#. RDRAND, RDSEED, or EGETKEY instructions have higher latency. ++ ++#. 
Executing RDRAND at the same time on multiple logical processors will be ++ serialized, resulting in an overall reduction in the maximum RDRAND ++ bandwidth. ++ ++#. Executing RDRAND, RDSEED or EGETKEY will delay memory accesses from other ++ logical processors that miss their core caches, with an impact similar to ++ legacy locked cache-line-split accesses. ++ ++The microcode updates provide an opt-out mechanism (RNGDS_MITG_DIS) to disable ++the mitigation for RDRAND and RDSEED instructions executed outside of Intel ++Software Guard Extensions (Intel SGX) enclaves. On logical processors that ++disable the mitigation using this opt-out mechanism, RDRAND and RDSEED do not ++take longer to execute and do not impact performance of sibling logical ++processors memory accesses. The opt-out mechanism does not affect Intel SGX ++enclaves (including execution of RDRAND or RDSEED inside an enclave, as well ++as EGETKEY execution). ++ ++IA32_MCU_OPT_CTRL MSR Definition ++-------------------------------- ++Along with the mitigation for this issue, Intel added a new thread-scope ++IA32_MCU_OPT_CTRL MSR, (address 0x123). The presence of this MSR and ++RNGDS_MITG_DIS (bit 0) is enumerated by CPUID.(EAX=07H,ECX=0).EDX[SRBDS_CTRL = ++9]==1. This MSR is introduced through the microcode update. ++ ++Setting IA32_MCU_OPT_CTRL[0] (RNGDS_MITG_DIS) to 1 for a logical processor ++disables the mitigation for RDRAND and RDSEED executed outside of an Intel SGX ++enclave on that logical processor. Opting out of the mitigation for a ++particular logical processor does not affect the RDRAND and RDSEED mitigations ++for other logical processors. ++ ++Note that inside of an Intel SGX enclave, the mitigation is applied regardless ++of the value of RNGDS_MITG_DS. ++ ++Mitigation control on the kernel command line ++--------------------------------------------- ++The kernel command line allows control over the SRBDS mitigation at boot time ++with the option "srbds=". 
The option for this is:
++
++ ============= =============================================================
++ off		This option disables SRBDS mitigation for RDRAND and RDSEED on
++		affected platforms.
++ ============= =============================================================
++
++SRBDS System Information
++-----------------------
++The Linux kernel provides vulnerability status information through sysfs. For
++SRBDS this can be accessed by the following sysfs file:
++/sys/devices/system/cpu/vulnerabilities/srbds
++
++The possible values contained in this file are:
++
++ ============================== =============================================
++ Not affected                   Processor not vulnerable
++ Vulnerable                     Processor vulnerable and mitigation disabled
++ Vulnerable: No microcode       Processor vulnerable and microcode is missing
++                                mitigation
++ Mitigation: Microcode          Processor is vulnerable and mitigation is in
++                                effect.
++ Mitigation: TSX disabled       Processor is only vulnerable when TSX is
++                                enabled while this system was booted with TSX
++                                disabled.
++ Unknown: Dependent on
++ hypervisor status              Running on virtual guest processor that is
++                                affected but with no way to know if host
++                                processor is mitigated or vulnerable.
++ ============================== =============================================
++
++SRBDS Default mitigation
++------------------------
++This new microcode serializes processor access during execution of RDRAND,
++RDSEED ensures that the shared buffer is overwritten before it is released for
++reuse. Use the "srbds=off" kernel command line to disable the mitigation for
++RDRAND and RDSEED.
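For reference, the mitigation selection and MSR-update logic that the bugs.c hunks above add can be reduced to a small decision table. The following standalone Python sketch mirrors `srbds_select_mitigation()` and `update_srbds_msr()` from the patch; it is illustrative only (the state names and boolean parameters are modelled on the kernel code, not part of the patches themselves):

```python
# Sketch of the SRBDS mitigation logic from the patch above (illustrative only).
RNGDS_MITG_DIS = 1 << 0  # bit 0 of MSR_IA32_MCU_OPT_CTRL (0x123)

def select_srbds_mitigation(affected, mds_no, has_rtm, hypervisor,
                            has_srbds_ctrl, mitigations_off):
    """Mirrors srbds_select_mitigation(): checks run in the kernel's order."""
    if not affected:                  # !boot_cpu_has_bug(X86_BUG_SRBDS)
        return None
    if mds_no and not has_rtm:        # MDS_NO parts are only exposed via TSX
        return "TSX_OFF"
    if hypervisor:                    # can't tell whether the host is mitigated
        return "HYPERVISOR"
    if not has_srbds_ctrl:            # microcode with the new MSR is missing
        return "UCODE_NEEDED"
    if mitigations_off:               # mitigations=off or srbds=off
        return "OFF"
    return "FULL"

def apply_to_msr(mitigation, mcu_ctrl):
    """Mirrors update_srbds_msr(): FULL clears RNGDS_MITG_DIS, OFF/TSX_OFF set it;
    UCODE_NEEDED and HYPERVISOR leave the MSR untouched."""
    if mitigation in ("OFF", "TSX_OFF"):
        return mcu_ctrl | RNGDS_MITG_DIS
    if mitigation == "FULL":
        return mcu_ctrl & ~RNGDS_MITG_DIS
    return mcu_ctrl
```

For example, an affected CPU with the updated microcode, TSX available, and no opt-out selects "FULL", which clears RNGDS_MITG_DIS and so keeps the RDRAND/RDSEED serialization active.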