| author | Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> | 2014-02-11 11:16:18 +0530 |
|---|---|---|
| committer | Eli Qiao <taget@linux.vnet.ibm.com> | 2014-02-12 10:28:54 +0800 |
| commit | 5b98c77b816318f275a84bcfd9017e042a97d1af (patch) | |
| tree | d9c864a349b57ee6d3e303be476ecb8c1926c66b | |
| parent | cc0303a7890980469f3d33a625998333ed42aa8c (diff) | |
| download | powerkvm-5b98c77b816318f275a84bcfd9017e042a97d1af.tar.gz | |
powerpc/powernv: Respect max_cpus while initializing core split/unsplit feature.
In the kdump kernel we see a hang during subcore_init(), at
unsplit_core()->wait_for_sync_step(). The kdump kernel always boots with
maxcpus=1 while all the other cpus are waiting inside OPAL, so with only
one online cpu the master thread keeps waiting indefinitely for the
secondary threads to set split_state. The same is true in any case where
max_cpus is not aligned with threads_per_core. Fix this by disabling the
core split/unsplit feature when max_cpus is not a multiple of
threads_per_core. This also fixes the kdump hang.
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
-rw-r--r-- | arch/powerpc/platforms/powernv/subcore.c | 14 | +++++++++++++- |
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/platforms/powernv/subcore.c b/arch/powerpc/platforms/powernv/subcore.c
index 75d0624673c10..1728ed8f28706 100644
--- a/arch/powerpc/platforms/powernv/subcore.c
+++ b/arch/powerpc/platforms/powernv/subcore.c
@@ -287,9 +287,14 @@ static int cpu_update_split_mode(void *data)
 	if (this_cpu_ptr(&split_state)->master) {
 		/* Wait for all cpus to finish before we touch subcores_per_core */
-		for_each_present_cpu(cpu)
+		for_each_present_cpu(cpu) {
+			/* Check if we reached maxcpus */
+			if (cpu >= setup_max_cpus)
+				break;
+
 			while(per_cpu(split_state, cpu).step < SYNC_STEP_FINISHED)
 				barrier();
+		}
 
 		new_split_mode = 0;
@@ -390,6 +395,13 @@ static int subcore_init(void)
 	if (!cpu_has_feature(CPU_FTR_ARCH_207S))
 		return 0;
 
+	/*
+	 * Respect max_cpus kernel parameter.
+	 * Continue only if max_cpus are aligned to threads_per_core.
+	 */
+	if (setup_max_cpus % threads_per_core)
+		return 0;
+
 	BUG_ON(!alloc_cpumask_var(&cpu_offline_mask, GFP_KERNEL));
 	set_subcores_per_core(1);