author     Sebastian Andrzej Siewior <bigeasy@linutronix.de>  2016-01-22 23:56:10 +0100
committer  Sebastian Andrzej Siewior <bigeasy@linutronix.de>  2016-01-22 23:56:10 +0100
commit     18b0ea1ce8ddca86b3f09a9122a3c722b265dfb8 (patch)
tree       5c77303303e2b5154e1dd8555b16a5196c553d62
parent     c5f0ba59007d49527ec8c89cd6ff721a8f5c294e (diff)
download   4.9-rt-patches-18b0ea1ce8ddca86b3f09a9122a3c722b265dfb8.tar.gz
[ANNOUNCE] 4.4-rt3
Dear RT folks!

I'm pleased to announce the v4.4-rt3 patch set.

Changes since v4.4-rt2:

- various compile fixes found by the kbuild test robot and Grygorii Strashko.

- the kbuild test robot reported that we open interrupts too early in
  ptrace_freeze_traced().

- dropped a GPIO patch from the OMAP queue which is no longer required
  (requested by Grygorii Strashko).

- dropped a retry loop in mm/anon_vma_free() which was probably just duct
  tape and no longer seems required.

- Various people pointed out that the AT91 clocksource driver did not
  compile. It does now. However, AT91 does not yet boot. There are two
  issues:
  - free_irq() from an irq-off region is not good and triggers a warning
    because it is invoked twice. This will be addressed later; the current
    patch is not bulletproof and not yet part of the series.
  - The PMC driver invokes request_irq() very early, which leads to a NULL
    pointer exception (non-RT with threaded interrupts has the same
    problem). A longer explanation by Alexandre Belloni and the patch
    series he refers to can be found at:
    http://lkml.kernel.org/r/1452997394-8554-1-git-send-email-alexandre.belloni@free-electrons.com

- Using a virtual network device (like a bridge) could lead to a
  "Dead loop" message and the packet being dropped. This problem has been
  fixed.

- Julia Lawall sent a patch against hwlat_detector to "move constants to
  the right of binary operators".

- Carsten Emde sent a patch to fix the latency histogram tracer.

- Mike Galbraith reported that the softirq ate about 25% CPU time doing
  nothing. The problem has been fixed.

- Grygorii Strashko pointed out that two RCU/ksoftirqd changes that were
  made to the non-RT version of the code did not make it into the RT
  version. This has been corrected.

- btrfs forgot to initialize a seqcount variable, which prints a warning
  if used with lockdep.

- A few users of napi_alloc_cache() were not protected against reentrance.

- Grygorii Strashko fixed highmem on ARM.

- Mike Galbraith reported that all tasks run on CPU0 even on a system with
  more than one CPU. Problem fixed by Thomas Gleixner.

- Anders Roxell sent two patches (against coupled and vsp1) because they
  either did not compile or printed a warning on -RT.

- Mike Galbraith pointed out that we forgot to check for NEED_RESCHED_LAZY
  in an exit path on x86 and provided a patch.

- Mike Galbraith pointed out that we don't consider the preempt_lazy_count
  in the common preemption check and provided a patch. With this fixed,
  SCHED_OTHER performance should improve (a small model of this check
  follows the list).

- A high network load could lead to RCU stalls followed by the OOM killer.
  Think of a slower ARM box on a GBit link running RT tasks, doing network
  IO (at an RT priority) and getting shot with a flood ping at a high
  rate. NAPI does not really kick in because each time NAPI tries to defer
  processing it starts again in the context of the IRQ thread of the
  network driver. This has been fixed in two steps:
  - once the NAPI budget is used up, we schedule ksoftirqd. This now works
    on -RT, too.
  - ksoftirqd now runs at SCHED_OTHER priority, like it does on !RT. Now
    the scheduler can preempt ksoftirqd and let RCU do its job. The timer
    and hrtimer softirq processing now happens in ktimersoftd, which runs
    at SCHED_FIFO (as ksoftirqd used to).

- Grygorii Strashko pointed out that if RCU_EXPERT is not enabled then we
  can't select RCU_BOOST. Therefore RCU_EXPERT defaults to y on RT.

- Grygorii Strashko pointed out that we missed the check for
  NEED_RESCHED_LAZY in an exit path on ARM. This has been fixed on ARM and
  on ARM64 as well.
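The preempt_lazy_count item above is easier to see in code. Below is a
small user-space model (an illustration only; the names need_resched,
need_resched_lazy, preempt_count and preempt_lazy_count merely mirror the
flags and counters used by the -RT series, this is not kernel code) of the
decision the fix implements: a regular resched request is always honoured
once preempt_count is zero, while a lazy request is additionally gated on
preempt_lazy_count being zero.

#include <stdbool.h>
#include <stdio.h>

struct task_model {
	bool need_resched;       /* models TIF_NEED_RESCHED: always honoured      */
	bool need_resched_lazy;  /* models TIF_NEED_RESCHED_LAZY: SCHED_OTHER only */
	int  preempt_count;
	int  preempt_lazy_count; /* must also be zero for a lazy preemption        */
};

static bool should_preempt(const struct task_model *t)
{
	if (t->preempt_count)
		return false;
	if (t->need_resched)
		return true;
	/* the rt3 change: a lazy request is only honoured when the lazy
	 * count has dropped to zero as well */
	return t->need_resched_lazy && t->preempt_lazy_count == 0;
}

int main(void)
{
	struct task_model t = { .need_resched_lazy = true, .preempt_lazy_count = 1 };

	printf("lazy request, lazy count held:     preempt=%d\n", should_preempt(&t));
	t.preempt_lazy_count = 0;
	printf("lazy request, lazy count released: preempt=%d\n", should_preempt(&t));
	return 0;
}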
This was a lot and I hope I forgot nothing important.

Known issues:

- bcache stays disabled.
- CPU hotplug is not better than before.
- The netlink_release() OOPS, reported by Clark, is still on the list but
  unsolved due to lack of information.

The delta patch against 4.4-rt2 is appended below and can be found here:

  https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.4/incr/patch-4.4-rt2-rt3.patch

You can get this release via the git tree at:

  git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git v4.4-rt3

The RT patch against 4.4 can be found here:

  https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.4/patch-4.4-rt3.patch.xz

The split quilt queue is available at:

  https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.4/patches-4.4-rt3.tar.xz

Sebastian

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-rw-r--r--  patches/0009-ARM-OMAP2-Drop-the-concept-of-certain-power-domains-.patch | 177
-rw-r--r--  patches/arm-arm64-lazy-preempt-add-TIF_NEED_RESCHED_LAZY-to-.patch | 76
-rw-r--r--  patches/arm-at91-pit-remove-irq-handler-when-clock-is-unused.patch | 35
-rw-r--r--  patches/arm-enable-highmem-for-rt.patch | 50
-rw-r--r--  patches/btrfs-initialize-the-seq-counter-in-struct-btrfs_dev.patch | 36
-rw-r--r--  patches/completion-use-simple-wait-queues.patch | 4
-rw-r--r--  patches/cond-resched-softirq-rt.patch | 4
-rw-r--r--  patches/cpu-rt-rework-cpu-down.patch | 14
-rw-r--r--  patches/cpu_chill-Add-a-UNINTERRUPTIBLE-hrtimer_nanosleep.patch | 12
-rw-r--r--  patches/drivers-cpuidle-coupled-fix-warning-cpuidle_coupled_.patch | 25
-rw-r--r--  patches/drivers-media-vsp1_video-fix-compile-error.patch | 32
-rw-r--r--  patches/genirq-do-not-invoke-the-affinity-callback-via-a-wor.patch | 14
-rw-r--r--  patches/hrtimer-enfore-64byte-alignment.patch | 5
-rw-r--r--  patches/hrtimer-fixup-hrtimer-callback-changes-for-preempt-r.patch | 62
-rw-r--r--  patches/hrtimers-prepare-full-preemption.patch | 6
-rw-r--r--  patches/hwlatdetect.patch | 10
-rw-r--r--  patches/introduce_migrate_disable_cpu_light.patch | 133
-rw-r--r--  patches/ipc-msg-Implement-lockless-pipelined-wakeups.patch | 3
-rw-r--r--  patches/kernel-SRCU-provide-a-static-initializer.patch | 2
-rw-r--r--  patches/latency-hist.patch | 34
-rw-r--r--  patches/localversion.patch | 4
-rw-r--r--  patches/mm-rmap-retry-lock-check-in-anon_vma_free.patch_vma_free.patch | 52
-rw-r--r--  patches/net-another-local-irq-disable-alloc-atomic-headache.patch | 20
-rw-r--r--  patches/net-core-protect-users-of-napi_alloc_cache-against-r.patch | 76
-rw-r--r--  patches/net-move-xmit_recursion-to-per-task-variable-on-RT.patch | 125
-rw-r--r--  patches/net-provide-a-way-to-delegate-processing-a-softirq-t.patch | 78
-rw-r--r--  patches/net-tx-action-avoid-livelock-on-rt.patch | 4
-rw-r--r--  patches/preempt-lazy-check-preempt_schedule.patch | 73
-rw-r--r--  patches/preempt-lazy-support.patch | 16
-rw-r--r--  patches/ptrace-don-t-open-IRQs-in-ptrace_freeze_traced-too-e.patch | 34
-rw-r--r--  patches/ptrace-fix-ptrace-vs-tasklist_lock-race.patch | 6
-rw-r--r--  patches/rcu-make-RCU_BOOST-default-on-RT.patch | 14
-rw-r--r--  patches/rt-introduce-cpu-chill.patch | 2
-rw-r--r--  patches/rtmutex-Use-chainwalking-control-enum.patch | 3
-rw-r--r--  patches/sched-might-sleep-do-not-account-rcu-depth.patch | 2
-rw-r--r--  patches/sched-mmdrop-delayed.patch | 8
-rw-r--r--  patches/sched-provide-a-tsk_nr_cpus_allowed-helper.patch | 261
-rw-r--r--  patches/sched-rt-mutex-wakeup.patch | 4
-rw-r--r--  patches/sched-ttwu-ensure-success-return-is-correct.patch | 2
-rw-r--r--  patches/sched-use-tsk_cpus_allowed-instead-of-accessing-cpus.patch | 57
-rw-r--r--  patches/sched-workqueue-Only-wake-up-idle-workers-if-not-blo.patch | 2
-rw-r--r--  patches/series | 32
-rw-r--r--  patches/softirq-disable-softirq-stacks-for-rt.patch | 2
-rw-r--r--  patches/softirq-split-locks.patch | 10
-rw-r--r--  patches/softirq-split-timer-softirqs-out-of-ksoftirqd.patch | 207
-rw-r--r--  patches/sparc64-use-generic-rwsem-spinlocks-rt.patch | 3
-rw-r--r--  patches/tasklet-rt-prevent-tasklets-from-going-into-infinite-spin-in-rt.patch | 6
-rw-r--r--  patches/timers-preempt-rt-support.patch | 2
-rw-r--r--  patches/timers-prepare-for-full-preemption.patch | 13
-rw-r--r--  patches/trace-latency-hist-Consider-new-argument-when-probin.patch | 37
-rw-r--r--  patches/upstream-net-rt-remove-preemption-disabling-in-netif_rx.patch | 4
-rw-r--r--  patches/workqueue-distangle-from-rq-lock.patch | 12
-rw-r--r--  patches/workqueue-prevent-deadlock-stall.patch | 4
-rw-r--r--  patches/x86-preempt-lazy.patch | 13
54 files changed, 1433 insertions, 489 deletions
diff --git a/patches/0009-ARM-OMAP2-Drop-the-concept-of-certain-power-domains-.patch b/patches/0009-ARM-OMAP2-Drop-the-concept-of-certain-power-domains-.patch
deleted file mode 100644
index bb5602e570643e..00000000000000
--- a/patches/0009-ARM-OMAP2-Drop-the-concept-of-certain-power-domains-.patch
+++ /dev/null
@@ -1,177 +0,0 @@
-From 70f4293bd36740fd730ab25abe39281d1b312365 Mon Sep 17 00:00:00 2001
-From: Russ Dill <Russ.Dill@ti.com>
-Date: Wed, 5 Aug 2015 15:30:44 +0530
-Subject: [PATCH 09/21] ARM: OMAP2: Drop the concept of certain power domains
- not being able to lose context.
-
-It isn't much of a win, and with hibernation, everything loses context.
-
-Signed-off-by: Russ Dill <Russ.Dill@ti.com>
-[j-keerthy@ti.com] ported to 4.1
-Signed-off-by: Keerthy <j-keerthy@ti.com>
----
- arch/arm/mach-omap2/gpio.c | 1
- arch/arm/mach-omap2/powerdomain.c | 40 --------------------------------
- arch/arm/mach-omap2/powerdomain.h | 1
- drivers/gpio/gpio-omap.c | 36 +++++++++++-----------------
- include/linux/platform_data/gpio-omap.h | 1
- 5 files changed, 14 insertions(+), 65 deletions(-)
-
---- a/arch/arm/mach-omap2/gpio.c
-+++ b/arch/arm/mach-omap2/gpio.c
-@@ -130,7 +130,6 @@ static int __init omap2_gpio_dev_init(st
- }
-
- pwrdm = omap_hwmod_get_pwrdm(oh);
-- pdata->loses_context = pwrdm_can_ever_lose_context(pwrdm);
-
- pdev = omap_device_build(name, id - 1, oh, pdata, sizeof(*pdata));
- kfree(pdata);
---- a/arch/arm/mach-omap2/powerdomain.c
-+++ b/arch/arm/mach-omap2/powerdomain.c
-@@ -1166,43 +1166,3 @@ int pwrdm_get_context_loss_count(struct
- return count;
- }
-
--/**
-- * pwrdm_can_ever_lose_context - can this powerdomain ever lose context?
-- * @pwrdm: struct powerdomain *
-- *
-- * Given a struct powerdomain * @pwrdm, returns 1 if the powerdomain
-- * can lose either memory or logic context or if @pwrdm is invalid, or
-- * returns 0 otherwise. This function is not concerned with how the
-- * powerdomain registers are programmed (i.e., to go off or not); it's
-- * concerned with whether it's ever possible for this powerdomain to
-- * go off while some other part of the chip is active. This function
-- * assumes that every powerdomain can go to either ON or INACTIVE.
-- */
--bool pwrdm_can_ever_lose_context(struct powerdomain *pwrdm)
--{
-- int i;
--
-- if (!pwrdm) {
-- pr_debug("powerdomain: %s: invalid powerdomain pointer\n",
-- __func__);
-- return 1;
-- }
--
-- if (pwrdm->pwrsts & PWRSTS_OFF)
-- return 1;
--
-- if (pwrdm->pwrsts & PWRSTS_RET) {
-- if (pwrdm->pwrsts_logic_ret & PWRSTS_OFF)
-- return 1;
--
-- for (i = 0; i < pwrdm->banks; i++)
-- if (pwrdm->pwrsts_mem_ret[i] & PWRSTS_OFF)
-- return 1;
-- }
--
-- for (i = 0; i < pwrdm->banks; i++)
-- if (pwrdm->pwrsts_mem_on[i] & PWRSTS_OFF)
-- return 1;
--
-- return 0;
--}
---- a/arch/arm/mach-omap2/powerdomain.h
-+++ b/arch/arm/mach-omap2/powerdomain.h
-@@ -244,7 +244,6 @@ int pwrdm_state_switch(struct powerdomai
- int pwrdm_pre_transition(struct powerdomain *pwrdm);
- int pwrdm_post_transition(struct powerdomain *pwrdm);
- int pwrdm_get_context_loss_count(struct powerdomain *pwrdm);
--bool pwrdm_can_ever_lose_context(struct powerdomain *pwrdm);
-
- extern int omap_set_pwrdm_state(struct powerdomain *pwrdm, u8 state);
-
---- a/drivers/gpio/gpio-omap.c
-+++ b/drivers/gpio/gpio-omap.c
-@@ -69,7 +69,7 @@ struct gpio_bank {
- struct device *dev;
- bool is_mpuio;
- bool dbck_flag;
-- bool loses_context;
-+
- bool context_valid;
- int stride;
- u32 width;
-@@ -1208,15 +1208,9 @@ static int omap_gpio_probe(struct platfo
- #ifdef CONFIG_OF_GPIO
- bank->chip.of_node = of_node_get(node);
- #endif
-- if (node) {
-- if (!of_property_read_bool(node, "ti,gpio-always-on"))
-- bank->loses_context = true;
-- } else {
-- bank->loses_context = pdata->loses_context;
--
-- if (bank->loses_context)
-- bank->get_context_loss_count =
-- pdata->get_context_loss_count;
-+ if (!node) {
-+ bank->get_context_loss_count =
-+ pdata->get_context_loss_count;
- }
-
- if (bank->regs->set_dataout && bank->regs->clr_dataout)
-@@ -1373,7 +1367,7 @@ static int omap_gpio_runtime_resume(stru
- * been initialised and so initialise it now. Also initialise
- * the context loss count.
- */
-- if (bank->loses_context && !bank->context_valid) {
-+ if (!bank->context_valid) {
- omap_gpio_init_context(bank);
-
- if (bank->get_context_loss_count)
-@@ -1394,17 +1388,15 @@ static int omap_gpio_runtime_resume(stru
- writel_relaxed(bank->context.risingdetect,
- bank->base + bank->regs->risingdetect);
-
-- if (bank->loses_context) {
-- if (!bank->get_context_loss_count) {
-+ if (!bank->get_context_loss_count) {
-+ omap_gpio_restore_context(bank);
-+ } else {
-+ c = bank->get_context_loss_count(bank->dev);
-+ if (c != bank->context_loss_count) {
- omap_gpio_restore_context(bank);
- } else {
-- c = bank->get_context_loss_count(bank->dev);
-- if (c != bank->context_loss_count) {
-- omap_gpio_restore_context(bank);
-- } else {
-- raw_spin_unlock_irqrestore(&bank->lock, flags);
-- return 0;
-- }
-+ spin_unlock_irqrestore(&bank->lock, flags);
-+ return 0;
- }
- }
-
-@@ -1476,7 +1468,7 @@ void omap2_gpio_prepare_for_idle(int pwr
- struct gpio_bank *bank;
-
- list_for_each_entry(bank, &omap_gpio_list, node) {
-- if (!BANK_USED(bank) || !bank->loses_context)
-+ if (!BANK_USED(bank))
- continue;
-
- bank->power_mode = pwr_mode;
-@@ -1490,7 +1482,7 @@ void omap2_gpio_resume_after_idle(void)
- struct gpio_bank *bank;
-
- list_for_each_entry(bank, &omap_gpio_list, node) {
-- if (!BANK_USED(bank) || !bank->loses_context)
-+ if (!BANK_USED(bank))
- continue;
-
- pm_runtime_get_sync(bank->dev);
---- a/include/linux/platform_data/gpio-omap.h
-+++ b/include/linux/platform_data/gpio-omap.h
-@@ -198,7 +198,6 @@ struct omap_gpio_platform_data {
- int bank_width; /* GPIO bank width */
- int bank_stride; /* Only needed for omap1 MPUIO */
- bool dbck_flag; /* dbck required or not - True for OMAP3&4 */
-- bool loses_context; /* whether the bank would ever lose context */
- bool is_mpuio; /* whether the bank is of type MPUIO */
- u32 non_wakeup_gpios;
-
diff --git a/patches/arm-arm64-lazy-preempt-add-TIF_NEED_RESCHED_LAZY-to-.patch b/patches/arm-arm64-lazy-preempt-add-TIF_NEED_RESCHED_LAZY-to-.patch
new file mode 100644
index 00000000000000..cf5ff52f9bdefc
--- /dev/null
+++ b/patches/arm-arm64-lazy-preempt-add-TIF_NEED_RESCHED_LAZY-to-.patch
@@ -0,0 +1,76 @@
+From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Fri, 22 Jan 2016 21:33:39 +0100
+Subject: arm+arm64: lazy-preempt: add TIF_NEED_RESCHED_LAZY to _TIF_WORK_MASK
+
+_TIF_WORK_MASK is used to check for TIF_NEED_RESCHED so we need to check
+for TIF_NEED_RESCHED_LAZY here, too.
+
+Reported-by: Grygorii Strashko <grygorii.strashko@ti.com>
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ arch/arm/include/asm/thread_info.h | 7 ++++---
+ arch/arm/kernel/entry-common.S | 9 +++++++--
+ arch/arm64/include/asm/thread_info.h | 3 ++-
+ 3 files changed, 13 insertions(+), 6 deletions(-)
+
+--- a/arch/arm/include/asm/thread_info.h
++++ b/arch/arm/include/asm/thread_info.h
+@@ -143,8 +143,8 @@ extern int vfp_restore_user_hwstate(stru
+ #define TIF_SYSCALL_TRACE 4 /* syscall trace active */
+ #define TIF_SYSCALL_AUDIT 5 /* syscall auditing active */
+ #define TIF_SYSCALL_TRACEPOINT 6 /* syscall tracepoint instrumentation */
+-#define TIF_SECCOMP 7 /* seccomp syscall filtering active */
+-#define TIF_NEED_RESCHED_LAZY 8
++#define TIF_SECCOMP 8 /* seccomp syscall filtering active */
++#define TIF_NEED_RESCHED_LAZY 7
+
+ #define TIF_NOHZ 12 /* in adaptive nohz mode */
+ #define TIF_USING_IWMMXT 17
+@@ -170,7 +170,8 @@ extern int vfp_restore_user_hwstate(stru
+ * Change these and you break ASM code in entry-common.S
+ */
+ #define _TIF_WORK_MASK (_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
+- _TIF_NOTIFY_RESUME | _TIF_UPROBE)
++ _TIF_NOTIFY_RESUME | _TIF_UPROBE | \
++ _TIF_NEED_RESCHED_LAZY)
+
+ #endif /* __KERNEL__ */
+ #endif /* __ASM_ARM_THREAD_INFO_H */
+--- a/arch/arm/kernel/entry-common.S
++++ b/arch/arm/kernel/entry-common.S
+@@ -36,7 +36,9 @@
+ UNWIND(.cantunwind )
+ disable_irq_notrace @ disable interrupts
+ ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall tracing
+- tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
++ tst r1, #((_TIF_SYSCALL_WORK | _TIF_WORK_MASK) & ~_TIF_SECCOMP)
++ bne fast_work_pending
++ tst r1, #_TIF_SECCOMP
+ bne fast_work_pending
+
+ /* perform architecture specific actions before user return */
+@@ -62,8 +64,11 @@ ENDPROC(ret_fast_syscall)
+ str r0, [sp, #S_R0 + S_OFF]! @ save returned r0
+ disable_irq_notrace @ disable interrupts
+ ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall tracing
+- tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
++ tst r1, #((_TIF_SYSCALL_WORK | _TIF_WORK_MASK) & ~_TIF_SECCOMP)
++ bne do_slower_path
++ tst r1, #_TIF_SECCOMP
+ beq no_work_pending
++do_slower_path:
+ UNWIND(.fnend )
+ ENDPROC(ret_fast_syscall)
+
+--- a/arch/arm64/include/asm/thread_info.h
++++ b/arch/arm64/include/asm/thread_info.h
+@@ -129,7 +129,8 @@ static inline struct thread_info *curren
+ #define _TIF_32BIT (1 << TIF_32BIT)
+
+ #define _TIF_WORK_MASK (_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
+- _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE)
++ _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE | \
++ _TIF_NEED_RESCHED_LAZY)
+
+ #define _TIF_SYSCALL_WORK (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
+ _TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \
diff --git a/patches/arm-at91-pit-remove-irq-handler-when-clock-is-unused.patch b/patches/arm-at91-pit-remove-irq-handler-when-clock-is-unused.patch
index b5a3fbc3dfbba5..f6201495041f06 100644
--- a/patches/arm-at91-pit-remove-irq-handler-when-clock-is-unused.patch
+++ b/patches/arm-at91-pit-remove-irq-handler-when-clock-is-unused.patch
@@ -13,21 +13,24 @@ commit 8fe82a55 ("ARM: at91: sparse irq support") which is included since v3.6.
Patch based on what Sami Pietikäinen <Sami.Pietikainen@wapice.com> suggested].
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
- drivers/clocksource/timer-atmel-pit.c | 15 ++++++++-------
- drivers/clocksource/timer-atmel-st.c | 28 ++++++++++++++++++++--------
- 2 files changed, 28 insertions(+), 15 deletions(-)
+ drivers/clocksource/timer-atmel-pit.c | 17 +++++++++--------
+ drivers/clocksource/timer-atmel-st.c | 32 ++++++++++++++++++++++----------
+ 2 files changed, 31 insertions(+), 18 deletions(-)
--- a/drivers/clocksource/timer-atmel-pit.c
+++ b/drivers/clocksource/timer-atmel-pit.c
-@@ -96,6 +96,7 @@ static int pit_clkevt_shutdown(struct cl
+@@ -96,15 +96,24 @@ static int pit_clkevt_shutdown(struct cl
/* disable irq, leaving the clocksource active */
pit_write(data->base, AT91_PIT_MR, (data->cycle - 1) | AT91_PIT_PITEN);
-+ free_irq(atmel_pit_irq, data);
++ free_irq(data->irq, data);
return 0;
}
-@@ -105,6 +106,13 @@ static int pit_clkevt_shutdown(struct cl
++static irqreturn_t at91sam926x_pit_interrupt(int irq, void *dev_id);
+ /*
+ * Clockevent device: interrupts every 1/HZ (== pit_cycles * MCK/16)
+ */
static int pit_clkevt_set_periodic(struct clock_event_device *dev)
{
struct pit_data *data = clkevt_to_pit_data(dev);
@@ -41,6 +44,14 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
/* update clocksource counter */
data->cnt += data->cycle * PIT_PICNT(pit_read(data->base, AT91_PIT_PIVR));
+@@ -181,7 +190,6 @@ static void __init at91sam926x_pit_commo
+ {
+ unsigned long pit_rate;
+ unsigned bits;
+- int ret;
+
+ /*
+ * Use our actual MCK to figure out how many MCK/16 ticks per
@@ -206,13 +214,6 @@ static void __init at91sam926x_pit_commo
data->clksrc.flags = CLOCK_SOURCE_IS_CONTINUOUS;
clocksource_register_hz(&data->clksrc, pit_rate);
@@ -91,7 +102,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
static int clkevt32k_set_periodic(struct clock_event_device *dev)
{
-+ int irq;
++ int ret;
+
clkdev32k_disable_and_flush_irq();
@@ -113,8 +124,14 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
regmap_st = syscon_node_to_regmap(node);
if (IS_ERR(regmap_st))
-@@ -214,13 +233,6 @@ static void __init atmel_st_timer_init(s
- if (!irq)
+@@ -210,17 +229,10 @@ static void __init atmel_st_timer_init(s
+ regmap_read(regmap_st, AT91_ST_SR, &val);
+
+ /* Get the interrupts property */
+- irq = irq_of_parse_and_map(node, 0);
+- if (!irq)
++ atmel_st_irq = irq_of_parse_and_map(node, 0);
++ if (!atmel_st_irq)
panic(pr_fmt("Unable to get IRQ from DT\n"));
- /* Make IRQs happen for the system timer */
diff --git a/patches/arm-enable-highmem-for-rt.patch b/patches/arm-enable-highmem-for-rt.patch
index baafd26b86e765..a3535b8f53d73b 100644
--- a/patches/arm-enable-highmem-for-rt.patch
+++ b/patches/arm-enable-highmem-for-rt.patch
@@ -6,10 +6,10 @@ fixup highmem for ARM.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
- arch/arm/include/asm/switch_to.h | 8 ++++++
- arch/arm/mm/highmem.c | 45 ++++++++++++++++++++++++++++++++++-----
+ arch/arm/include/asm/switch_to.h | 8 +++++
+ arch/arm/mm/highmem.c | 56 +++++++++++++++++++++++++++++++++------
include/linux/highmem.h | 1
- 3 files changed, 49 insertions(+), 5 deletions(-)
+ 3 files changed, 57 insertions(+), 8 deletions(-)
--- a/arch/arm/include/asm/switch_to.h
+++ b/arch/arm/include/asm/switch_to.h
@@ -37,7 +37,19 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/arch/arm/mm/highmem.c
+++ b/arch/arm/mm/highmem.c
-@@ -54,12 +54,13 @@ EXPORT_SYMBOL(kunmap);
+@@ -34,6 +34,11 @@ static inline pte_t get_fixmap_pte(unsig
+ return *ptep;
+ }
+
++static unsigned int fixmap_idx(int type)
++{
++ return FIX_KMAP_BEGIN + type + KM_TYPE_NR * smp_processor_id();
++}
++
+ void *kmap(struct page *page)
+ {
+ might_sleep();
+@@ -54,12 +59,13 @@ EXPORT_SYMBOL(kunmap);
void *kmap_atomic(struct page *page)
{
@@ -52,7 +64,16 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
pagefault_disable();
if (!PageHighMem(page))
return page_address(page);
-@@ -93,7 +94,10 @@ void *kmap_atomic(struct page *page)
+@@ -79,7 +85,7 @@ void *kmap_atomic(struct page *page)
+
+ type = kmap_atomic_idx_push();
+
+- idx = FIX_KMAP_BEGIN + type + KM_TYPE_NR * smp_processor_id();
++ idx = fixmap_idx(type);
+ vaddr = __fix_to_virt(idx);
+ #ifdef CONFIG_DEBUG_HIGHMEM
+ /*
+@@ -93,7 +99,10 @@ void *kmap_atomic(struct page *page)
* in place, so the contained TLB flush ensures the TLB is updated
* with the new mapping.
*/
@@ -64,7 +85,12 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
return (void *)vaddr;
}
-@@ -110,6 +114,9 @@ void __kunmap_atomic(void *kvaddr)
+@@ -106,10 +115,13 @@ void __kunmap_atomic(void *kvaddr)
+
+ if (kvaddr >= (void *)FIXADDR_START) {
+ type = kmap_atomic_idx();
+- idx = FIX_KMAP_BEGIN + type + KM_TYPE_NR * smp_processor_id();
++ idx = fixmap_idx(type);
if (cache_is_vivt())
__cpuc_flush_dcache_area((void *)vaddr, PAGE_SIZE);
@@ -74,7 +100,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
#ifdef CONFIG_DEBUG_HIGHMEM
BUG_ON(vaddr != __fix_to_virt(idx));
#else
-@@ -122,17 +129,18 @@ void __kunmap_atomic(void *kvaddr)
+@@ -122,28 +134,56 @@ void __kunmap_atomic(void *kvaddr)
kunmap_high(pte_page(pkmap_page_table[PKMAP_NR(vaddr)]));
}
pagefault_enable();
@@ -95,7 +121,11 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
pagefault_disable();
if (!PageHighMem(page))
return page_address(page);
-@@ -143,7 +151,34 @@ void *kmap_atomic_pfn(unsigned long pfn)
+
+ type = kmap_atomic_idx_push();
+- idx = FIX_KMAP_BEGIN + type + KM_TYPE_NR * smp_processor_id();
++ idx = fixmap_idx(type);
+ vaddr = __fix_to_virt(idx);
#ifdef CONFIG_DEBUG_HIGHMEM
BUG_ON(!pte_none(get_fixmap_pte(vaddr)));
#endif
@@ -116,7 +146,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+ * Clear @prev's kmap_atomic mappings
+ */
+ for (i = 0; i < prev_p->kmap_idx; i++) {
-+ int idx = i + KM_TYPE_NR * smp_processor_id();
++ int idx = fixmap_idx(i);
+
+ set_fixmap_pte(idx, __pte(0));
+ }
@@ -124,7 +154,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+ * Restore @next_p's kmap_atomic mappings
+ */
+ for (i = 0; i < next_p->kmap_idx; i++) {
-+ int idx = i + KM_TYPE_NR * smp_processor_id();
++ int idx = fixmap_idx(i);
+
+ if (!pte_none(next_p->kmap_pte[i]))
+ set_fixmap_pte(idx, next_p->kmap_pte[i]);
diff --git a/patches/btrfs-initialize-the-seq-counter-in-struct-btrfs_dev.patch b/patches/btrfs-initialize-the-seq-counter-in-struct-btrfs_dev.patch
new file mode 100644
index 00000000000000..0d86e259c45734
--- /dev/null
+++ b/patches/btrfs-initialize-the-seq-counter-in-struct-btrfs_dev.patch
@@ -0,0 +1,36 @@
+From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Fri, 15 Jan 2016 14:28:39 +0100
+Subject: btrfs: initialize the seq counter in struct btrfs_device
+
+I managed to trigger this:
+| INFO: trying to register non-static key.
+| the code is fine but needs lockdep annotation.
+| turning off the locking correctness validator.
+| CPU: 1 PID: 781 Comm: systemd-gpt-aut Not tainted 4.4.0-rt2+ #14
+| Hardware name: ARM-Versatile Express
+| [<80307cec>] (dump_stack)
+| [<80070e98>] (__lock_acquire)
+| [<8007184c>] (lock_acquire)
+| [<80287800>] (btrfs_ioctl)
+| [<8012a8d4>] (do_vfs_ioctl)
+| [<8012ac14>] (SyS_ioctl)
+
+so I think that btrfs_device_data_ordered_init() is not invoked behind
+a macro somewhere.
+
+Fixes: 7cc8e58d53cd ("Btrfs: fix unprotected device's variants on 32bits machine")
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ fs/btrfs/volumes.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -232,6 +232,7 @@ static struct btrfs_device *__alloc_devi
+ spin_lock_init(&dev->reada_lock);
+ atomic_set(&dev->reada_in_flight, 0);
+ atomic_set(&dev->dev_stats_ccnt, 0);
++ btrfs_device_data_ordered_init(dev);
+ INIT_RADIX_TREE(&dev->reada_zones, GFP_NOFS & ~__GFP_DIRECT_RECLAIM);
+ INIT_RADIX_TREE(&dev->reada_extents, GFP_NOFS & ~__GFP_DIRECT_RECLAIM);
+
diff --git a/patches/completion-use-simple-wait-queues.patch b/patches/completion-use-simple-wait-queues.patch
index 616a4e87f0ccc1..b7df99b5f255fb 100644
--- a/patches/completion-use-simple-wait-queues.patch
+++ b/patches/completion-use-simple-wait-queues.patch
@@ -198,7 +198,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
EXPORT_SYMBOL(completion_done);
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
-@@ -3143,7 +3143,10 @@ void migrate_disable(void)
+@@ -3102,7 +3102,10 @@ void migrate_disable(void)
}
#ifdef CONFIG_SCHED_DEBUG
@@ -210,7 +210,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
#endif
if (p->migrate_disable) {
-@@ -3173,7 +3176,10 @@ void migrate_enable(void)
+@@ -3130,7 +3133,10 @@ void migrate_enable(void)
}
#ifdef CONFIG_SCHED_DEBUG
diff --git a/patches/cond-resched-softirq-rt.patch b/patches/cond-resched-softirq-rt.patch
index fa4f589c5ae9b3..03e2ccc0cd62e9 100644
--- a/patches/cond-resched-softirq-rt.patch
+++ b/patches/cond-resched-softirq-rt.patch
@@ -34,7 +34,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
{
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
-@@ -4832,6 +4832,7 @@ int __cond_resched_lock(spinlock_t *lock
+@@ -4771,6 +4771,7 @@ int __cond_resched_lock(spinlock_t *lock
}
EXPORT_SYMBOL(__cond_resched_lock);
@@ -42,7 +42,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
int __sched __cond_resched_softirq(void)
{
BUG_ON(!in_softirq());
-@@ -4845,6 +4846,7 @@ int __sched __cond_resched_softirq(void)
+@@ -4784,6 +4785,7 @@ int __sched __cond_resched_softirq(void)
return 0;
}
EXPORT_SYMBOL(__cond_resched_softirq);
diff --git a/patches/cpu-rt-rework-cpu-down.patch b/patches/cpu-rt-rework-cpu-down.patch
index 416ab1e2858650..e7a88d94a5a3d4 100644
--- a/patches/cpu-rt-rework-cpu-down.patch
+++ b/patches/cpu-rt-rework-cpu-down.patch
@@ -56,7 +56,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
-@@ -2284,6 +2284,10 @@ extern void do_set_cpus_allowed(struct t
+@@ -2287,6 +2287,10 @@ extern void do_set_cpus_allowed(struct t
extern int set_cpus_allowed_ptr(struct task_struct *p,
const struct cpumask *new_mask);
@@ -67,7 +67,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
#else
static inline void do_set_cpus_allowed(struct task_struct *p,
const struct cpumask *new_mask)
-@@ -2296,6 +2300,9 @@ static inline int set_cpus_allowed_ptr(s
+@@ -2299,6 +2303,9 @@ static inline int set_cpus_allowed_ptr(s
return -EINVAL;
return 0;
}
@@ -442,7 +442,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* interrupt affinities.
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
-@@ -1220,6 +1220,84 @@ void do_set_cpus_allowed(struct task_str
+@@ -1211,6 +1211,84 @@ void do_set_cpus_allowed(struct task_str
enqueue_task(rq, p, ENQUEUE_RESTORE);
}
@@ -527,7 +527,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/*
* Change a given task's CPU affinity. Migrate the thread to a
* proper CPU and schedule it away if the CPU it's executing on
-@@ -3085,7 +3163,7 @@ void migrate_disable(void)
+@@ -3044,7 +3122,7 @@ void migrate_disable(void)
{
struct task_struct *p = current;
@@ -536,9 +536,9 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
#ifdef CONFIG_SCHED_DEBUG
p->migrate_disable_atomic++;
#endif
-@@ -3118,7 +3196,7 @@ void migrate_enable(void)
- unsigned long flags;
- struct rq *rq;
+@@ -3075,7 +3153,7 @@ void migrate_enable(void)
+ {
+ struct task_struct *p = current;
- if (in_atomic() || p->flags & PF_NO_SETAFFINITY) {
+ if (in_atomic()) {
diff --git a/patches/cpu_chill-Add-a-UNINTERRUPTIBLE-hrtimer_nanosleep.patch b/patches/cpu_chill-Add-a-UNINTERRUPTIBLE-hrtimer_nanosleep.patch
index e40b98dd6011e7..1a0a39a0191006 100644
--- a/patches/cpu_chill-Add-a-UNINTERRUPTIBLE-hrtimer_nanosleep.patch
+++ b/patches/cpu_chill-Add-a-UNINTERRUPTIBLE-hrtimer_nanosleep.patch
@@ -33,7 +33,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
-@@ -1657,12 +1657,13 @@ void hrtimer_init_sleeper(struct hrtimer
+@@ -1656,12 +1656,13 @@ void hrtimer_init_sleeper(struct hrtimer
}
EXPORT_SYMBOL_GPL(hrtimer_init_sleeper);
@@ -49,7 +49,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
hrtimer_start_expires(&t->timer, mode);
if (likely(t->task))
-@@ -1704,7 +1705,8 @@ long __sched hrtimer_nanosleep_restart(s
+@@ -1703,7 +1704,8 @@ long __sched hrtimer_nanosleep_restart(s
HRTIMER_MODE_ABS);
hrtimer_set_expires_tv64(&t.timer, restart->nanosleep.expires);
@@ -59,7 +59,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
goto out;
rmtp = restart->nanosleep.rmtp;
-@@ -1721,8 +1723,10 @@ long __sched hrtimer_nanosleep_restart(s
+@@ -1720,8 +1722,10 @@ long __sched hrtimer_nanosleep_restart(s
return ret;
}
@@ -72,7 +72,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
{
struct restart_block *restart;
struct hrtimer_sleeper t;
-@@ -1735,7 +1739,7 @@ long hrtimer_nanosleep(struct timespec *
+@@ -1734,7 +1738,7 @@ long hrtimer_nanosleep(struct timespec *
hrtimer_init_on_stack(&t.timer, clockid, mode);
hrtimer_set_expires_range_ns(&t.timer, timespec_to_ktime(*rqtp), slack);
@@ -81,7 +81,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
goto out;
/* Absolute timers do not update the rmtp value and restart: */
-@@ -1762,6 +1766,12 @@ long hrtimer_nanosleep(struct timespec *
+@@ -1761,6 +1765,12 @@ long hrtimer_nanosleep(struct timespec *
return ret;
}
@@ -94,7 +94,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
SYSCALL_DEFINE2(nanosleep, struct timespec __user *, rqtp,
struct timespec __user *, rmtp)
{
-@@ -1788,7 +1798,8 @@ void cpu_chill(void)
+@@ -1787,7 +1797,8 @@ void cpu_chill(void)
unsigned int freeze_flag = current->flags & PF_NOFREEZE;
current->flags |= PF_NOFREEZE;
diff --git a/patches/drivers-cpuidle-coupled-fix-warning-cpuidle_coupled_.patch b/patches/drivers-cpuidle-coupled-fix-warning-cpuidle_coupled_.patch
new file mode 100644
index 00000000000000..5c3d5cd9d58c1b
--- /dev/null
+++ b/patches/drivers-cpuidle-coupled-fix-warning-cpuidle_coupled_.patch
@@ -0,0 +1,25 @@
+From: Anders Roxell <anders.roxell@linaro.org>
+Date: Fri, 15 Jan 2016 20:21:12 +0100
+Subject: drivers/cpuidle: coupled: fix warning cpuidle_coupled_lock
+
+Used multi_v7_defconfig+PREEMPT_RT_FULL=y and this caused a compilation
+warning without this fix:
+../drivers/cpuidle/coupled.c:122:21: warning: 'cpuidle_coupled_lock'
+defined but not used [-Wunused-variable]
+
+Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ drivers/cpuidle/coupled.c | 1 -
+ 1 file changed, 1 deletion(-)
+
+--- a/drivers/cpuidle/coupled.c
++++ b/drivers/cpuidle/coupled.c
+@@ -119,7 +119,6 @@ struct cpuidle_coupled {
+
+ #define CPUIDLE_COUPLED_NOT_IDLE (-1)
+
+-static DEFINE_MUTEX(cpuidle_coupled_lock);
+ static DEFINE_PER_CPU(struct call_single_data, cpuidle_coupled_poke_cb);
+
+ /*
diff --git a/patches/drivers-media-vsp1_video-fix-compile-error.patch b/patches/drivers-media-vsp1_video-fix-compile-error.patch
new file mode 100644
index 00000000000000..57d677d1b5d3fa
--- /dev/null
+++ b/patches/drivers-media-vsp1_video-fix-compile-error.patch
@@ -0,0 +1,32 @@
+From: Anders Roxell <anders.roxell@linaro.org>
+Date: Fri, 15 Jan 2016 01:09:43 +0100
+Subject: drivers/media: vsp1_video: fix compile error
+
+This was found with the -RT patch enabled, but the fix should apply to
+non-RT also.
+
+Compilation error without this fix:
+../drivers/media/platform/vsp1/vsp1_video.c: In function
+'vsp1_pipeline_stopped':
+../drivers/media/platform/vsp1/vsp1_video.c:524:2: error: expected
+expression before 'do'
+ spin_unlock_irqrestore(&pipe->irqlock, flags);
+ ^
+
+Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ drivers/media/platform/vsp1/vsp1_video.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/media/platform/vsp1/vsp1_video.c
++++ b/drivers/media/platform/vsp1/vsp1_video.c
+@@ -520,7 +520,7 @@ static bool vsp1_pipeline_stopped(struct
+ bool stopped;
+
+ spin_lock_irqsave(&pipe->irqlock, flags);
+- stopped = pipe->state == VSP1_PIPELINE_STOPPED,
++ stopped = pipe->state == VSP1_PIPELINE_STOPPED;
+ spin_unlock_irqrestore(&pipe->irqlock, flags);
+
+ return stopped;
diff --git a/patches/genirq-do-not-invoke-the-affinity-callback-via-a-wor.patch b/patches/genirq-do-not-invoke-the-affinity-callback-via-a-wor.patch
index 6617a5fcfc12f3..ad6cdadcf876cb 100644
--- a/patches/genirq-do-not-invoke-the-affinity-callback-via-a-wor.patch
+++ b/patches/genirq-do-not-invoke-the-affinity-callback-via-a-wor.patch
@@ -10,13 +10,21 @@ wakeup() a process while holding the lock.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
- include/linux/interrupt.h | 1
+ include/linux/interrupt.h | 2 +
kernel/irq/manage.c | 79 ++++++++++++++++++++++++++++++++++++++++++++--
- 2 files changed, 77 insertions(+), 3 deletions(-)
+ 2 files changed, 78 insertions(+), 3 deletions(-)
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
-@@ -217,6 +217,7 @@ struct irq_affinity_notify {
+@@ -206,6 +206,7 @@ extern void resume_device_irqs(void);
+ * @irq: Interrupt to which notification applies
+ * @kref: Reference count, for internal use
+ * @work: Work item, for internal use
++ * @list: List item for deferred callbacks
+ * @notify: Function to be called on change. This will be
+ * called in process context.
+ * @release: Function to be called on release. This will be
+@@ -217,6 +218,7 @@ struct irq_affinity_notify {
unsigned int irq;
struct kref kref;
struct work_struct work;
diff --git a/patches/hrtimer-enfore-64byte-alignment.patch b/patches/hrtimer-enfore-64byte-alignment.patch
index 336f03719098fa..ef6c1d1d622d23 100644
--- a/patches/hrtimer-enfore-64byte-alignment.patch
+++ b/patches/hrtimer-enfore-64byte-alignment.patch
@@ -1,7 +1,6 @@
-From e35e67cb032e78055b63eae5a3a370664fabfc01 Mon Sep 17 00:00:00 2001
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Wed, 23 Dec 2015 20:57:41 +0100
-Subject: [PATCH] hrtimer: enfore 64byte alignment
+Subject: hrtimer: enfore 64byte alignment
The patch "hrtimer: Fixup hrtimer callback changes for preempt-rt" adds
a list_head expired to struct hrtimer_clock_base and with it we run into
@@ -14,7 +13,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/include/linux/hrtimer.h
+++ b/include/linux/hrtimer.h
-@@ -124,11 +124,7 @@ struct hrtimer_sleeper {
+@@ -125,11 +125,7 @@ struct hrtimer_sleeper {
struct task_struct *task;
};
diff --git a/patches/hrtimer-fixup-hrtimer-callback-changes-for-preempt-r.patch b/patches/hrtimer-fixup-hrtimer-callback-changes-for-preempt-r.patch
index 42d03ce29a17bb..44a33a905a58e1 100644
--- a/patches/hrtimer-fixup-hrtimer-callback-changes-for-preempt-r.patch
+++ b/patches/hrtimer-fixup-hrtimer-callback-changes-for-preempt-r.patch
@@ -12,17 +12,26 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
- include/linux/hrtimer.h | 4 +
+ include/linux/hrtimer.h | 7 ++
kernel/sched/core.c | 1
kernel/sched/rt.c | 1
- kernel/time/hrtimer.c | 142 +++++++++++++++++++++++++++++++++++++++++++----
+ kernel/time/hrtimer.c | 137 +++++++++++++++++++++++++++++++++++++++++++----
kernel/time/tick-sched.c | 1
kernel/watchdog.c | 1
- 6 files changed, 139 insertions(+), 11 deletions(-)
+ 6 files changed, 139 insertions(+), 9 deletions(-)
--- a/include/linux/hrtimer.h
+++ b/include/linux/hrtimer.h
-@@ -102,6 +102,8 @@ struct hrtimer {
+@@ -87,6 +87,8 @@ enum hrtimer_restart {
+ * @function: timer expiry callback function
+ * @base: pointer to the timer base (per cpu and per clock)
+ * @state: state information (See bit values above)
++ * @cb_entry: list entry to defer timers from hardirq context
++ * @irqsafe: timer can run in hardirq context
+ * @praecox: timer expiry time if expired at the time of programming
+ * @start_pid: timer statistics field to store the pid of the task which
+ * started the timer
+@@ -103,6 +105,8 @@ struct hrtimer {
enum hrtimer_restart (*function)(struct hrtimer *);
struct hrtimer_clock_base *base;
unsigned long state;
@@ -31,7 +40,15 @@ Signed-off-by: Ingo Molnar <mingo@elte.hu>
#ifdef CONFIG_MISSED_TIMER_OFFSETS_HIST
ktime_t praecox;
#endif
-@@ -141,6 +143,7 @@ struct hrtimer_clock_base {
+@@ -134,6 +138,7 @@ struct hrtimer_sleeper {
+ * timer to a base on another cpu.
+ * @clockid: clock id for per_cpu support
+ * @active: red black tree root node for the active timers
++ * @expired: list head for deferred timers.
+ * @get_time: function to retrieve the current time of the clock
+ * @offset: offset of this clock to the monotonic base
+ */
+@@ -142,6 +147,7 @@ struct hrtimer_clock_base {
int index;
clockid_t clockid;
struct timerqueue_head active;
@@ -39,7 +56,7 @@ Signed-off-by: Ingo Molnar <mingo@elte.hu>
ktime_t (*get_time)(void);
ktime_t offset;
} __attribute__((__aligned__(HRTIMER_CLOCK_BASE_ALIGN)));
-@@ -184,6 +187,7 @@ struct hrtimer_cpu_base {
+@@ -185,6 +191,7 @@ struct hrtimer_cpu_base {
raw_spinlock_t lock;
seqcount_t seq;
struct hrtimer *running;
@@ -120,7 +137,7 @@ Signed-off-by: Ingo Molnar <mingo@elte.hu>
cpu_base->running == timer)
return true;
-@@ -1292,10 +1296,111 @@ static void __run_hrtimer(struct hrtimer
+@@ -1292,12 +1296,112 @@ static void __run_hrtimer(struct hrtimer
cpu_base->running = NULL;
}
@@ -223,7 +240,8 @@ Signed-off-by: Ingo Molnar <mingo@elte.hu>
+
+#endif
+
-+
+ static enum hrtimer_restart hrtimer_wakeup(struct hrtimer *timer);
+
static void __hrtimer_run_queues(struct hrtimer_cpu_base *cpu_base, ktime_t now)
{
struct hrtimer_clock_base *base = cpu_base->clock_base;
@@ -232,31 +250,23 @@ Signed-off-by: Ingo Molnar <mingo@elte.hu>
for (; active; base++, active >>= 1) {
struct timerqueue_node *node;
-@@ -1335,15 +1440,20 @@ static void __hrtimer_run_queues(struct
+@@ -1337,9 +1441,14 @@ static void __hrtimer_run_queues(struct
if (basenow.tv64 < hrtimer_get_softexpires_tv64(timer))
break;
- __run_hrtimer(cpu_base, base, timer, &basenow);
-+ if (!hrtimer_rt_defer(timer))
-+ __run_hrtimer(cpu_base, base, timer, &basenow);
-+ else
-+ raise = 1;
++ if (!hrtimer_rt_defer(timer))
++ __run_hrtimer(cpu_base, base, timer, &basenow);
++ else
++ raise = 1;
}
}
+ if (raise)
+ raise_softirq_irqoff(HRTIMER_SOFTIRQ);
}
--#ifdef CONFIG_HIGH_RES_TIMERS
--
- static enum hrtimer_restart hrtimer_wakeup(struct hrtimer *timer);
-
-+#ifdef CONFIG_HIGH_RES_TIMERS
-+
- /*
- * High resolution timer interrupt
- * Called with interrupts disabled
-@@ -1481,8 +1591,6 @@ void hrtimer_run_queues(void)
+ #ifdef CONFIG_HIGH_RES_TIMERS
+@@ -1481,8 +1590,6 @@ void hrtimer_run_queues(void)
now = hrtimer_update_base(cpu_base);
__hrtimer_run_queues(cpu_base, now);
raw_spin_unlock(&cpu_base->lock);
@@ -265,7 +275,7 @@ Signed-off-by: Ingo Molnar <mingo@elte.hu>
}
/*
-@@ -1504,6 +1612,7 @@ static enum hrtimer_restart hrtimer_wake
+@@ -1504,6 +1611,7 @@ static enum hrtimer_restart hrtimer_wake
void hrtimer_init_sleeper(struct hrtimer_sleeper *sl, struct task_struct *task)
{
sl->timer.function = hrtimer_wakeup;
@@ -273,7 +283,7 @@ Signed-off-by: Ingo Molnar <mingo@elte.hu>
sl->task = task;
}
EXPORT_SYMBOL_GPL(hrtimer_init_sleeper);
-@@ -1638,6 +1747,7 @@ static void init_hrtimers_cpu(int cpu)
+@@ -1638,6 +1746,7 @@ static void init_hrtimers_cpu(int cpu)
for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
cpu_base->clock_base[i].cpu_base = cpu_base;
timerqueue_init_head(&cpu_base->clock_base[i].active);
@@ -281,7 +291,7 @@ Signed-off-by: Ingo Molnar <mingo@elte.hu>
}
cpu_base->cpu = cpu;
-@@ -1742,11 +1852,21 @@ static struct notifier_block hrtimers_nb
+@@ -1742,11 +1851,21 @@ static struct notifier_block hrtimers_nb
.notifier_call = hrtimer_cpu_notify,
};
diff --git a/patches/hrtimers-prepare-full-preemption.patch b/patches/hrtimers-prepare-full-preemption.patch
index bbcaa28560733c..33858ef4e28643 100644
--- a/patches/hrtimers-prepare-full-preemption.patch
+++ b/patches/hrtimers-prepare-full-preemption.patch
@@ -17,7 +17,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/include/linux/hrtimer.h
+++ b/include/linux/hrtimer.h
-@@ -204,6 +204,9 @@ struct hrtimer_cpu_base {
+@@ -205,6 +205,9 @@ struct hrtimer_cpu_base {
unsigned int nr_hangs;
unsigned int max_hang_time;
#endif
@@ -27,7 +27,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
struct hrtimer_clock_base clock_base[HRTIMER_MAX_CLOCK_BASES];
} ____cacheline_aligned;
-@@ -392,6 +395,13 @@ static inline void hrtimer_restart(struc
+@@ -393,6 +396,13 @@ static inline void hrtimer_restart(struc
hrtimer_start_expires(timer, HRTIMER_MODE_ABS);
}
@@ -41,7 +41,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/* Query timers: */
extern ktime_t hrtimer_get_remaining(const struct hrtimer *timer);
-@@ -411,7 +421,7 @@ static inline int hrtimer_is_queued(stru
+@@ -412,7 +422,7 @@ static inline int hrtimer_is_queued(stru
* Helper function to check, whether the timer is running the callback
* function
*/
diff --git a/patches/hwlatdetect.patch b/patches/hwlatdetect.patch
index aa24630f492c0e..bac7d137204976 100644
--- a/patches/hwlatdetect.patch
+++ b/patches/hwlatdetect.patch
@@ -724,7 +724,7 @@ Signed-off-by: Carsten Emde <C.Emde@osadl.org>
+
+ buf[sizeof(buf)-1] = '\0'; /* just in case */
+ err = kstrtoul(buf, 10, &val);
-+ if (0 != err)
++ if (err)
+ return -EINVAL;
+
+ if (val) {
@@ -1028,7 +1028,7 @@ Signed-off-by: Carsten Emde <C.Emde@osadl.org>
+
+ buf[U64STR_SIZE-1] = '\0'; /* just in case */
+ err = kstrtoull(buf, 10, &val);
-+ if (0 != err)
++ if (err)
+ return -EINVAL;
+
+ mutex_lock(&data.lock);
@@ -1112,7 +1112,7 @@ Signed-off-by: Carsten Emde <C.Emde@osadl.org>
+
+ buf[U64STR_SIZE-1] = '\0'; /* just in case */
+ err = kstrtoull(buf, 10, &val);
-+ if (0 != err)
++ if (err)
+ return -EINVAL;
+
+ mutex_lock(&data.lock);
@@ -1305,11 +1305,11 @@ Signed-off-by: Carsten Emde <C.Emde@osadl.org>
+ pr_info(BANNER "version %s\n", VERSION);
+
+ ret = init_stats();
-+ if (0 != ret)
++ if (ret)
+ goto out;
+
+ ret = init_debugfs();
-+ if (0 != ret)
++ if (ret)
+ goto err_stats;
+
+ if (enabled)
diff --git a/patches/introduce_migrate_disable_cpu_light.patch b/patches/introduce_migrate_disable_cpu_light.patch
index 8d5df970442520..3f268d8e16c717 100644
--- a/patches/introduce_migrate_disable_cpu_light.patch
+++ b/patches/introduce_migrate_disable_cpu_light.patch
@@ -30,14 +30,14 @@ invoked again from another caller on the same CPU.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
- include/linux/cpu.h | 3 +
- include/linux/preempt.h | 9 +++
- include/linux/sched.h | 29 +++++++++-
- include/linux/smp.h | 3 +
- kernel/sched/core.c | 132 +++++++++++++++++++++++++++++++++++++++++++++++-
- kernel/sched/debug.c | 7 ++
- lib/smp_processor_id.c | 5 +
- 7 files changed, 182 insertions(+), 6 deletions(-)
+ include/linux/cpu.h | 3 ++
+ include/linux/preempt.h | 9 ++++++
+ include/linux/sched.h | 39 ++++++++++++++++++++------
+ include/linux/smp.h | 3 ++
+ kernel/sched/core.c | 71 +++++++++++++++++++++++++++++++++++++++++++++++-
+ kernel/sched/debug.c | 7 ++++
+ lib/smp_processor_id.c | 5 ++-
+ 7 files changed, 126 insertions(+), 11 deletions(-)
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -89,17 +89,22 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
int nr_cpus_allowed;
cpumask_t cpus_allowed;
-@@ -1837,9 +1843,6 @@ extern int arch_task_struct_size __read_
+@@ -1837,14 +1843,6 @@ extern int arch_task_struct_size __read_
# define arch_task_struct_size (sizeof(struct task_struct))
#endif
-/* Future-safe accessor for struct task_struct's cpus_allowed. */
-#define tsk_cpus_allowed(tsk) (&(tsk)->cpus_allowed)
-
+-static inline int tsk_nr_cpus_allowed(struct task_struct *p)
+-{
+- return p->nr_cpus_allowed;
+-}
+-
#define TNF_MIGRATED 0x01
#define TNF_NO_GROUP 0x02
#define TNF_SHARED 0x04
-@@ -3116,6 +3119,26 @@ static inline void set_task_cpu(struct t
+@@ -3121,6 +3119,31 @@ static inline void set_task_cpu(struct t
#endif /* CONFIG_SMP */
@@ -115,14 +120,19 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+/* Future-safe accessor for struct task_struct's cpus_allowed. */
+static inline const struct cpumask *tsk_cpus_allowed(struct task_struct *p)
+{
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ if (p->migrate_disable)
++ if (__migrate_disabled(p))
+ return cpumask_of(task_cpu(p));
-+#endif
+
+ return &p->cpus_allowed;
+}
+
++static inline int tsk_nr_cpus_allowed(struct task_struct *p)
++{
++ if (__migrate_disabled(p))
++ return 1;
++ return p->nr_cpus_allowed;
++}
++
extern long sched_setaffinity(pid_t pid, const struct cpumask *new_mask);
extern long sched_getaffinity(pid_t pid, struct cpumask *mask);
@@ -140,27 +150,11 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* boot command line:
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
-@@ -1164,6 +1164,15 @@ void set_cpus_allowed_common(struct task
- p->nr_cpus_allowed = cpumask_weight(new_mask);
- }
-
-+#if defined(CONFIG_PREEMPT_RT_FULL) && defined(CONFIG_SMP)
-+#define MIGRATE_DISABLE_SET_AFFIN (1<<30) /* Can't make a negative */
-+#define migrate_disabled_updated(p) ((p)->migrate_disable & MIGRATE_DISABLE_SET_AFFIN)
-+#define migrate_disable_count(p) ((p)->migrate_disable & ~MIGRATE_DISABLE_SET_AFFIN)
-+#else
-+static inline void update_migrate_disable(struct task_struct *p) { }
-+#define migrate_disabled_updated(p) 0
-+#endif
-+
- void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
- {
- struct rq *rq = task_rq(p);
-@@ -1171,6 +1180,11 @@ void do_set_cpus_allowed(struct task_str
+@@ -1171,6 +1171,11 @@ void do_set_cpus_allowed(struct task_str
lockdep_assert_held(&p->pi_lock);
-+ if (migrate_disabled_updated(p)) {
++ if (__migrate_disabled(p)) {
+ cpumask_copy(&p->cpus_allowed, new_mask);
+ return;
+ }
@@ -168,7 +162,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
queued = task_on_rq_queued(p);
running = task_current(rq, p);
-@@ -1232,7 +1246,7 @@ static int __set_cpus_allowed_ptr(struct
+@@ -1232,7 +1237,7 @@ static int __set_cpus_allowed_ptr(struct
do_set_cpus_allowed(p, new_mask);
/* Can the task run on the task's current CPU? If so, we're done */
@@ -177,44 +171,12 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
goto out;
dest_cpu = cpumask_any_and(cpu_active_mask, new_mask);
-@@ -3022,6 +3036,120 @@ static inline void schedule_debug(struct
+@@ -3022,6 +3027,70 @@ static inline void schedule_debug(struct
schedstat_inc(this_rq(), sched_count);
}
+#if defined(CONFIG_PREEMPT_RT_FULL) && defined(CONFIG_SMP)
+
-+static inline void update_migrate_disable(struct task_struct *p)
-+{
-+ const struct cpumask *mask;
-+
-+ if (likely(!p->migrate_disable))
-+ return;
-+
-+ /* Did we already update affinity? */
-+ if (unlikely(migrate_disabled_updated(p)))
-+ return;
-+
-+ /*
-+ * Since this is always current we can get away with only locking
-+ * rq->lock, the ->cpus_allowed value can normally only be changed
-+ * while holding both p->pi_lock and rq->lock, but seeing that this
-+ * is current, we cannot actually be waking up, so all code that
-+ * relies on serialization against p->pi_lock is out of scope.
-+ *
-+ * Having rq->lock serializes us against things like
-+ * set_cpus_allowed_ptr() that can still happen concurrently.
-+ */
-+ mask = tsk_cpus_allowed(p);
-+
-+ if (p->sched_class->set_cpus_allowed)
-+ p->sched_class->set_cpus_allowed(p, mask);
-+ /* mask==cpumask_of(task_cpu(p)) which has a cpumask_weight==1 */
-+ p->nr_cpus_allowed = 1;
-+
-+ /* Let migrate_enable know to fix things back up */
-+ p->migrate_disable |= MIGRATE_DISABLE_SET_AFFIN;
-+}
-+
+void migrate_disable(void)
+{
+ struct task_struct *p = current;
@@ -238,6 +200,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+ preempt_disable();
+ pin_current_cpu();
+ p->migrate_disable = 1;
++ p->nr_cpus_allowed = 1;
+ preempt_enable();
+}
+EXPORT_SYMBOL(migrate_disable);
@@ -245,9 +208,6 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+void migrate_enable(void)
+{
+ struct task_struct *p = current;
-+ const struct cpumask *mask;
-+ unsigned long flags;
-+ struct rq *rq;
+
+ if (in_atomic() || p->flags & PF_NO_SETAFFINITY) {
+#ifdef CONFIG_SCHED_DEBUG
@@ -261,33 +221,17 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+#endif
+ WARN_ON_ONCE(p->migrate_disable <= 0);
+
-+ if (migrate_disable_count(p) > 1) {
++ if (p->migrate_disable > 1) {
+ p->migrate_disable--;
+ return;
+ }
+
+ preempt_disable();
-+ if (unlikely(migrate_disabled_updated(p))) {
-+ /*
-+ * Undo whatever update_migrate_disable() did, also see there
-+ * about locking.
-+ */
-+ rq = this_rq();
-+ raw_spin_lock_irqsave(&current->pi_lock, flags);
-+ raw_spin_lock(&rq->lock);
-+
-+ /*
-+ * Clearing migrate_disable causes tsk_cpus_allowed to
-+ * show the tasks original cpu affinity.
-+ */
-+ p->migrate_disable = 0;
-+ mask = tsk_cpus_allowed(p);
-+ do_set_cpus_allowed(p, mask);
-+
-+ raw_spin_unlock(&rq->lock);
-+ raw_spin_unlock_irqrestore(&current->pi_lock, flags);
-+ } else
-+ p->migrate_disable = 0;
++ /*
++ * Clearing migrate_disable causes tsk_cpus_allowed to
++ * show the tasks original cpu affinity.
++ */
++ p->migrate_disable = 0;
+
+ unpin_current_cpu();
+ preempt_enable();
@@ -298,15 +242,6 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/*
* Pick up the highest-prio task:
*/
-@@ -3137,6 +3265,8 @@ static void __sched notrace __schedule(b
- raw_spin_lock_irq(&rq->lock);
- lockdep_pin_lock(&rq->lock);
-
-+ update_migrate_disable(prev);
-+
- rq->clock_skip_update <<= 1; /* promote REQ to ACT */
-
- switch_count = &prev->nivcsw;
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -251,6 +251,9 @@ void print_rt_rq(struct seq_file *m, int
diff --git a/patches/ipc-msg-Implement-lockless-pipelined-wakeups.patch b/patches/ipc-msg-Implement-lockless-pipelined-wakeups.patch
index be023bfbf596ee..dcab30eacdb09f 100644
--- a/patches/ipc-msg-Implement-lockless-pipelined-wakeups.patch
+++ b/patches/ipc-msg-Implement-lockless-pipelined-wakeups.patch
@@ -1,7 +1,6 @@
-From 9a69dce752915917ecfe06a21f9c826c76f6eb07 Mon Sep 17 00:00:00 2001
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Fri, 30 Oct 2015 11:59:07 +0100
-Subject: [PATCH] ipc/msg: Implement lockless pipelined wakeups
+Subject: ipc/msg: Implement lockless pipelined wakeups
This patch moves the wakeup_process() invocation so it is not done under
the perm->lock by making use of a lockless wake_q. With this change, the
diff --git a/patches/kernel-SRCU-provide-a-static-initializer.patch b/patches/kernel-SRCU-provide-a-static-initializer.patch
index d63ac9a0f40f58..e73dad9895ba89 100644
--- a/patches/kernel-SRCU-provide-a-static-initializer.patch
+++ b/patches/kernel-SRCU-provide-a-static-initializer.patch
@@ -1,6 +1,6 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Tue, 19 Mar 2013 14:44:30 +0100
-Subject: [PATCH] kernel/SRCU: provide a static initializer
+Subject: kernel/SRCU: provide a static initializer
There are macros for static initializer for the three out of four
possible notifier types, that are:
diff --git a/patches/latency-hist.patch b/patches/latency-hist.patch
index f287de9c73f4f0..066943e6294d91 100644
--- a/patches/latency-hist.patch
+++ b/patches/latency-hist.patch
@@ -14,7 +14,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
Documentation/trace/histograms.txt | 186 +++++
- include/linux/hrtimer.h | 3
+ include/linux/hrtimer.h | 4
include/linux/sched.h | 6
include/trace/events/hist.h | 72 ++
include/trace/events/latency_hist.h | 29
@@ -23,7 +23,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
kernel/trace/Makefile | 4
kernel/trace/latency_hist.c | 1178 ++++++++++++++++++++++++++++++++++++
kernel/trace/trace_irqsoff.c | 11
- 10 files changed, 1614 insertions(+)
+ 10 files changed, 1615 insertions(+)
--- /dev/null
+++ b/Documentation/trace/histograms.txt
@@ -216,7 +216,15 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+These data are also reset when the wakeup histogram is reset.
--- a/include/linux/hrtimer.h
+++ b/include/linux/hrtimer.h
-@@ -102,6 +102,9 @@ struct hrtimer {
+@@ -87,6 +87,7 @@ enum hrtimer_restart {
+ * @function: timer expiry callback function
+ * @base: pointer to the timer base (per cpu and per clock)
+ * @state: state information (See bit values above)
++ * @praecox: timer expiry time if expired at the time of programming
+ * @start_pid: timer statistics field to store the pid of the task which
+ * started the timer
+ * @start_site: timer statistics field to store the site where the timer
+@@ -102,6 +103,9 @@ struct hrtimer {
enum hrtimer_restart (*function)(struct hrtimer *);
struct hrtimer_clock_base *base;
unsigned long state;
@@ -375,7 +383,16 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
leftmost = enqueue_hrtimer(timer, new_base);
if (!leftmost)
goto unlock;
-@@ -1275,6 +1285,15 @@ static void __hrtimer_run_queues(struct
+@@ -1256,6 +1266,8 @@ static void __run_hrtimer(struct hrtimer
+ cpu_base->running = NULL;
+ }
+
++static enum hrtimer_restart hrtimer_wakeup(struct hrtimer *timer);
++
+ static void __hrtimer_run_queues(struct hrtimer_cpu_base *cpu_base, ktime_t now)
+ {
+ struct hrtimer_clock_base *base = cpu_base->clock_base;
+@@ -1275,6 +1287,15 @@ static void __hrtimer_run_queues(struct
timer = container_of(node, struct hrtimer, node);
@@ -391,15 +408,6 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/*
* The immediate goal for using the softexpires is
* minimizing wakeups, not running timers at the
-@@ -1297,6 +1316,8 @@ static void __hrtimer_run_queues(struct
-
- #ifdef CONFIG_HIGH_RES_TIMERS
-
-+static enum hrtimer_restart hrtimer_wakeup(struct hrtimer *timer);
-+
- /*
- * High resolution timer interrupt
- * Called with interrupts disabled
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -187,6 +187,24 @@ config IRQSOFF_TRACER
diff --git a/patches/localversion.patch b/patches/localversion.patch
index 24f89a4b90f490..37cf30feccba7f 100644
--- a/patches/localversion.patch
+++ b/patches/localversion.patch
@@ -1,4 +1,4 @@
-Subject: v4.4-rt2
+Subject: v4.4-rt3
From: Thomas Gleixner <tglx@linutronix.de>
Date: Fri, 08 Jul 2011 20:25:16 +0200
@@ -10,4 +10,4 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- /dev/null
+++ b/localversion-rt
@@ -0,0 +1 @@
-+-rt2
++-rt3
diff --git a/patches/mm-rmap-retry-lock-check-in-anon_vma_free.patch_vma_free.patch b/patches/mm-rmap-retry-lock-check-in-anon_vma_free.patch_vma_free.patch
deleted file mode 100644
index 4ea33c7c6bd0b2..00000000000000
--- a/patches/mm-rmap-retry-lock-check-in-anon_vma_free.patch_vma_free.patch
+++ /dev/null
@@ -1,52 +0,0 @@
-From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Date: Tue, 1 Dec 2015 17:57:02 +0100
-Subject: mm/rmap: retry lock check in anon_vma_free()
-
-anon_vma_free() checks if the rwsem is locked and if so performs a
-rw lock + unlock operation. It seems the purpose is to flush the current
-reader out.
-From testing it seems that after the anon_vma_unlock_write() there is
-the rt_mutex's owner field has the waiter bit set. It does seem right to
-leave and kfree() that memory if there is still a waiter on that lock.
-The msleep() is there in case the anon_vma_free() caller has the highest
-priority and the waiter never gets scheduled.
-
-XXX
-
-Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
----
- mm/rmap.c | 12 +++++++++++-
- 1 file changed, 11 insertions(+), 1 deletion(-)
-
---- a/mm/rmap.c
-+++ b/mm/rmap.c
-@@ -89,8 +89,10 @@ static inline struct anon_vma *anon_vma_
- return anon_vma;
- }
-
--static inline void anon_vma_free(struct anon_vma *anon_vma)
-+#include <linux/delay.h>
-+static void anon_vma_free(struct anon_vma *anon_vma)
- {
-+ int cnt = 0;
- VM_BUG_ON(atomic_read(&anon_vma->refcount));
-
- /*
-@@ -111,9 +113,17 @@ static inline void anon_vma_free(struct
- * happen _before_ what follows.
- */
- might_sleep();
-+retry:
- if (rwsem_is_locked(&anon_vma->root->rwsem)) {
- anon_vma_lock_write(anon_vma);
- anon_vma_unlock_write(anon_vma);
-+
-+ if (rwsem_is_locked(&anon_vma->root->rwsem)) {
-+ cnt++;
-+ if (cnt > 3)
-+ msleep(1);
-+ }
-+ goto retry;
- }
-
- kmem_cache_free(anon_vma_cachep, anon_vma);
diff --git a/patches/net-another-local-irq-disable-alloc-atomic-headache.patch b/patches/net-another-local-irq-disable-alloc-atomic-headache.patch
index 19a6826d8af0ae..6180444570e50d 100644
--- a/patches/net-another-local-irq-disable-alloc-atomic-headache.patch
+++ b/patches/net-another-local-irq-disable-alloc-atomic-headache.patch
@@ -6,8 +6,8 @@ Replace it by a local lock. Though that's pretty inefficient :(
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
- net/core/skbuff.c | 6 ++++--
- 1 file changed, 4 insertions(+), 2 deletions(-)
+ net/core/skbuff.c | 10 ++++++----
+ 1 file changed, 6 insertions(+), 4 deletions(-)
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -40,3 +40,19 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
return data;
}
+@@ -427,13 +429,13 @@ struct sk_buff *__netdev_alloc_skb(struc
+ if (sk_memalloc_socks())
+ gfp_mask |= __GFP_MEMALLOC;
+
+- local_irq_save(flags);
++ local_lock_irqsave(netdev_alloc_lock, flags);
+
+ nc = this_cpu_ptr(&netdev_alloc_cache);
+ data = __alloc_page_frag(nc, len, gfp_mask);
+ pfmemalloc = nc->pfmemalloc;
+
+- local_irq_restore(flags);
++ local_unlock_irqrestore(netdev_alloc_lock, flags);
+
+ if (unlikely(!data))
+ return NULL;
diff --git a/patches/net-core-protect-users-of-napi_alloc_cache-against-r.patch b/patches/net-core-protect-users-of-napi_alloc_cache-against-r.patch
new file mode 100644
index 00000000000000..299ea20cf070de
--- /dev/null
+++ b/patches/net-core-protect-users-of-napi_alloc_cache-against-r.patch
@@ -0,0 +1,76 @@
+From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Fri, 15 Jan 2016 16:33:34 +0100
+Subject: net/core: protect users of napi_alloc_cache against
+ reentrance
+
+On -RT the code running in BH can not be moved to another CPU, so the
+CPU-local variables remain local. However, the code can be preempted and
+another task may then enter BH on the same CPU and access the same
+napi_alloc_cache variable.
+This patch ensures that each user of napi_alloc_cache takes a local lock.
+
+Cc: stable-rt@vger.kernel.org
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ net/core/skbuff.c | 18 ++++++++++++++----
+ 1 file changed, 14 insertions(+), 4 deletions(-)
+
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -351,6 +351,7 @@ EXPORT_SYMBOL(build_skb);
+ static DEFINE_PER_CPU(struct page_frag_cache, netdev_alloc_cache);
+ static DEFINE_PER_CPU(struct page_frag_cache, napi_alloc_cache);
+ static DEFINE_LOCAL_IRQ_LOCK(netdev_alloc_lock);
++static DEFINE_LOCAL_IRQ_LOCK(napi_alloc_cache_lock);
+
+ static void *__netdev_alloc_frag(unsigned int fragsz, gfp_t gfp_mask)
+ {
+@@ -380,9 +381,13 @@ EXPORT_SYMBOL(netdev_alloc_frag);
+
+ static void *__napi_alloc_frag(unsigned int fragsz, gfp_t gfp_mask)
+ {
+- struct page_frag_cache *nc = this_cpu_ptr(&napi_alloc_cache);
++ struct page_frag_cache *nc;
++ void *data;
+
+- return __alloc_page_frag(nc, fragsz, gfp_mask);
++ nc = &get_locked_var(napi_alloc_cache_lock, napi_alloc_cache);
++ data = __alloc_page_frag(nc, fragsz, gfp_mask);
++ put_locked_var(napi_alloc_cache_lock, napi_alloc_cache);
++ return data;
+ }
+
+ void *napi_alloc_frag(unsigned int fragsz)
+@@ -476,9 +481,10 @@ EXPORT_SYMBOL(__netdev_alloc_skb);
+ struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
+ gfp_t gfp_mask)
+ {
+- struct page_frag_cache *nc = this_cpu_ptr(&napi_alloc_cache);
++ struct page_frag_cache *nc;
+ struct sk_buff *skb;
+ void *data;
++ bool pfmemalloc;
+
+ len += NET_SKB_PAD + NET_IP_ALIGN;
+
+@@ -496,7 +502,11 @@ struct sk_buff *__napi_alloc_skb(struct
+ if (sk_memalloc_socks())
+ gfp_mask |= __GFP_MEMALLOC;
+
++ nc = &get_locked_var(napi_alloc_cache_lock, napi_alloc_cache);
+ data = __alloc_page_frag(nc, len, gfp_mask);
++ pfmemalloc = nc->pfmemalloc;
++ put_locked_var(napi_alloc_cache_lock, napi_alloc_cache);
++
+ if (unlikely(!data))
+ return NULL;
+
+@@ -507,7 +517,7 @@ struct sk_buff *__napi_alloc_skb(struct
+ }
+
+ /* use OR instead of assignment to avoid clearing of bits in mask */
+- if (nc->pfmemalloc)
++ if (pfmemalloc)
+ skb->pfmemalloc = 1;
+ skb->head_frag = 1;
+
diff --git a/patches/net-move-xmit_recursion-to-per-task-variable-on-RT.patch b/patches/net-move-xmit_recursion-to-per-task-variable-on-RT.patch
new file mode 100644
index 00000000000000..cfce7cb0de64e6
--- /dev/null
+++ b/patches/net-move-xmit_recursion-to-per-task-variable-on-RT.patch
@@ -0,0 +1,125 @@
+From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Wed, 13 Jan 2016 15:55:02 +0100
+Subject: net: move xmit_recursion to per-task variable on -RT
+
+A softirq on -RT can be preempted. That means one task may be in
+__dev_queue_xmit(), get preempted, and another task may enter
+__dev_queue_xmit() as well. netperf together with a bridge device
+will then trigger the `recursion alert` because each task increments
+the xmit_recursion variable, which is per-CPU.
+A virtual device like br0 is required to trigger this warning.
+
+This patch moves the counter to a per-task variable instead of per-CPU
+so it counts the recursion properly on -RT.
+
+Cc: stable-rt@vger.kernel.org
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ include/linux/netdevice.h | 9 +++++++++
+ include/linux/sched.h | 3 +++
+ net/core/dev.c | 41 ++++++++++++++++++++++++++++++++++++++---
+ 3 files changed, 50 insertions(+), 3 deletions(-)
+
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -2249,11 +2249,20 @@ void netdev_freemem(struct net_device *d
+ void synchronize_net(void);
+ int init_dummy_netdev(struct net_device *dev);
+
++#ifdef CONFIG_PREEMPT_RT_FULL
++static inline int dev_recursion_level(void)
++{
++ return current->xmit_recursion;
++}
++
++#else
++
+ DECLARE_PER_CPU(int, xmit_recursion);
+ static inline int dev_recursion_level(void)
+ {
+ return this_cpu_read(xmit_recursion);
+ }
++#endif
+
+ struct net_device *dev_get_by_index(struct net *net, int ifindex);
+ struct net_device *__dev_get_by_index(struct net *net, int ifindex);
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1851,6 +1851,9 @@ struct task_struct {
+ #ifdef CONFIG_DEBUG_ATOMIC_SLEEP
+ unsigned long task_state_change;
+ #endif
++#ifdef CONFIG_PREEMPT_RT_FULL
++ int xmit_recursion;
++#endif
+ int pagefault_disabled;
+ /* CPU-specific state of this task */
+ struct thread_struct thread;
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -2940,9 +2940,44 @@ static void skb_update_prio(struct sk_bu
+ #define skb_update_prio(skb)
+ #endif
+
++#ifdef CONFIG_PREEMPT_RT_FULL
++
++static inline int xmit_rec_read(void)
++{
++ return current->xmit_recursion;
++}
++
++static inline void xmit_rec_inc(void)
++{
++ current->xmit_recursion++;
++}
++
++static inline void xmit_rec_dec(void)
++{
++ current->xmit_recursion--;
++}
++
++#else
++
+ DEFINE_PER_CPU(int, xmit_recursion);
+ EXPORT_SYMBOL(xmit_recursion);
+
++static inline int xmit_rec_read(void)
++{
++ return __this_cpu_read(xmit_recursion);
++}
++
++static inline void xmit_rec_inc(void)
++{
++ __this_cpu_inc(xmit_recursion);
++}
++
++static inline void xmit_rec_dec(void)
++{
++ __this_cpu_dec(xmit_recursion);
++}
++#endif
++
+ #define RECURSION_LIMIT 10
+
+ /**
+@@ -3135,7 +3170,7 @@ static int __dev_queue_xmit(struct sk_bu
+
+ if (txq->xmit_lock_owner != cpu) {
+
+- if (__this_cpu_read(xmit_recursion) > RECURSION_LIMIT)
++ if (xmit_rec_read() > RECURSION_LIMIT)
+ goto recursion_alert;
+
+ skb = validate_xmit_skb(skb, dev);
+@@ -3145,9 +3180,9 @@ static int __dev_queue_xmit(struct sk_bu
+ HARD_TX_LOCK(dev, txq, cpu);
+
+ if (!netif_xmit_stopped(txq)) {
+- __this_cpu_inc(xmit_recursion);
++ xmit_rec_inc();
+ skb = dev_hard_start_xmit(skb, dev, txq, &rc);
+- __this_cpu_dec(xmit_recursion);
++ xmit_rec_dec();
+ if (dev_xmit_complete(rc)) {
+ HARD_TX_UNLOCK(dev, txq);
+ goto out;
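
To make the difference concrete: on -RT two senders can share one CPU mid-transmit, so a per-CPU counter may be incremented by both and trip the limit spuriously, while a per-task counter only ever sees one sender's own nesting. A rough userspace analogue using a thread-local counter follows; the helper names mirror the patch, everything else is invented for the sketch.

    #include <stdio.h>

    #define RECURSION_LIMIT 10

    /* per-"task" recursion counter: one copy per thread, so a preempted
     * sender cannot inflate the count seen by another sender on the CPU */
    static __thread int xmit_recursion;

    static int  xmit_rec_read(void) { return xmit_recursion; }
    static void xmit_rec_inc(void)  { xmit_recursion++; }
    static void xmit_rec_dec(void)  { xmit_recursion--; }

    /* models __dev_queue_xmit() re-entering itself through a stack of
     * virtual devices (e.g. a bridge on top of a physical NIC) */
    static int queue_xmit(int nested_devices)
    {
            int ret;

            if (xmit_rec_read() > RECURSION_LIMIT)
                    return -1;      /* the "recursion alert" case */

            xmit_rec_inc();
            ret = nested_devices ? queue_xmit(nested_devices - 1) : 0;
            xmit_rec_dec();
            return ret;
    }

    int main(void)
    {
            printf("shallow stack: %d, runaway stack: %d\n",
                   queue_xmit(3), queue_xmit(64));
            return 0;
    }
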
diff --git a/patches/net-provide-a-way-to-delegate-processing-a-softirq-t.patch b/patches/net-provide-a-way-to-delegate-processing-a-softirq-t.patch
new file mode 100644
index 00000000000000..cb9c1626b16dd0
--- /dev/null
+++ b/patches/net-provide-a-way-to-delegate-processing-a-softirq-t.patch
@@ -0,0 +1,78 @@
+From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Wed, 20 Jan 2016 15:39:05 +0100
+Subject: net: provide a way to delegate processing a softirq to
+ ksoftirqd
+
+If NET_RX uses up all of its budget, it moves the following NAPI
+invocations into `ksoftirqd`. On -RT it does not do so. Instead it
+raises the NET_RX softirq in its current context again.
+
+In order to get closer to mainline's behaviour, this patch provides
+__raise_softirq_irqoff_ksoft(), which raises the softirq in ksoftirqd.
+
+Cc: stable-rt@vger.kernel.org
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ include/linux/interrupt.h | 8 ++++++++
+ kernel/softirq.c | 21 +++++++++++++++++++++
+ net/core/dev.c | 2 +-
+ 3 files changed, 30 insertions(+), 1 deletion(-)
+
+--- a/include/linux/interrupt.h
++++ b/include/linux/interrupt.h
+@@ -465,6 +465,14 @@ extern void thread_do_softirq(void);
+ extern void open_softirq(int nr, void (*action)(struct softirq_action *));
+ extern void softirq_init(void);
+ extern void __raise_softirq_irqoff(unsigned int nr);
++#ifdef CONFIG_PREEMPT_RT_FULL
++extern void __raise_softirq_irqoff_ksoft(unsigned int nr);
++#else
++static inline void __raise_softirq_irqoff_ksoft(unsigned int nr)
++{
++ __raise_softirq_irqoff(nr);
++}
++#endif
+
+ extern void raise_softirq_irqoff(unsigned int nr);
+ extern void raise_softirq(unsigned int nr);
+--- a/kernel/softirq.c
++++ b/kernel/softirq.c
+@@ -673,6 +673,27 @@ void __raise_softirq_irqoff(unsigned int
+ }
+
+ /*
++ * Same as __raise_softirq_irqoff() but will process them in ksoftirqd
++ */
++void __raise_softirq_irqoff_ksoft(unsigned int nr)
++{
++ unsigned int mask;
++
++ if (WARN_ON_ONCE(!__this_cpu_read(ksoftirqd) ||
++ !__this_cpu_read(ktimer_softirqd)))
++ return;
++ mask = 1UL << nr;
++
++ trace_softirq_raise(nr);
++ or_softirq_pending(mask);
++ if (mask & TIMER_SOFTIRQS)
++ __this_cpu_read(ktimer_softirqd)->softirqs_raised |= mask;
++ else
++ __this_cpu_read(ksoftirqd)->softirqs_raised |= mask;
++ wakeup_proper_softirq(nr);
++}
++
++/*
+ * This function must run with irqs disabled!
+ */
+ void raise_softirq_irqoff(unsigned int nr)
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -4919,7 +4919,7 @@ static void net_rx_action(struct softirq
+ list_splice_tail(&repoll, &list);
+ list_splice(&list, &sd->poll_list);
+ if (!list_empty(&sd->poll_list))
+- __raise_softirq_irqoff(NET_RX_SOFTIRQ);
++ __raise_softirq_irqoff_ksoft(NET_RX_SOFTIRQ);
+
+ net_rps_action_and_irq_enable(sd);
+ }
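
The effect of __raise_softirq_irqoff_ksoft() is that, once the NAPI budget is exhausted, the remaining work is only marked pending and a dedicated thread is woken, rather than re-running the softirq in the IRQ thread's own context. A small userspace sketch of that hand-off, with a worker thread standing in for ksoftirqd (all names here are invented for the illustration):

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  wake = PTHREAD_COND_INITIALIZER;
    static bool pending;

    /* stand-in for ksoftirqd: sleep until work is raised, then run it */
    static void *softirq_worker(void *arg)
    {
            (void)arg;
            pthread_mutex_lock(&lock);
            while (!pending)
                    pthread_cond_wait(&wake, &lock);
            pending = false;
            pthread_mutex_unlock(&lock);
            printf("deferred work processed in the worker thread\n");
            return NULL;
    }

    /* analogue of __raise_softirq_irqoff_ksoft(): mark the work pending
     * and wake the worker instead of processing it in this context */
    static void raise_in_worker(void)
    {
            pthread_mutex_lock(&lock);
            pending = true;
            pthread_cond_signal(&wake);
            pthread_mutex_unlock(&lock);
    }

    int main(void)
    {
            pthread_t tid;

            pthread_create(&tid, NULL, softirq_worker, NULL);
            raise_in_worker();
            pthread_join(tid, NULL);
            return 0;
    }
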
diff --git a/patches/net-tx-action-avoid-livelock-on-rt.patch b/patches/net-tx-action-avoid-livelock-on-rt.patch
index 0edb32ae153080..7af70bed73da18 100644
--- a/patches/net-tx-action-avoid-livelock-on-rt.patch
+++ b/patches/net-tx-action-avoid-livelock-on-rt.patch
@@ -44,7 +44,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/net/core/dev.c
+++ b/net/core/dev.c
-@@ -3598,6 +3598,36 @@ int netif_rx_ni(struct sk_buff *skb)
+@@ -3633,6 +3633,36 @@ int netif_rx_ni(struct sk_buff *skb)
}
EXPORT_SYMBOL(netif_rx_ni);
@@ -81,7 +81,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
static void net_tx_action(struct softirq_action *h)
{
struct softnet_data *sd = this_cpu_ptr(&softnet_data);
-@@ -3639,7 +3669,7 @@ static void net_tx_action(struct softirq
+@@ -3674,7 +3704,7 @@ static void net_tx_action(struct softirq
head = head->next_sched;
root_lock = qdisc_lock(q);
diff --git a/patches/preempt-lazy-check-preempt_schedule.patch b/patches/preempt-lazy-check-preempt_schedule.patch
new file mode 100644
index 00000000000000..e932f49e0f301f
--- /dev/null
+++ b/patches/preempt-lazy-check-preempt_schedule.patch
@@ -0,0 +1,73 @@
+From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Wed, 20 Jan 2016 15:13:30 +0100
+Subject: preempt-lazy: Add the lazy-preemption check to preempt_schedule()
+
+Probably in the rebase onto v4.1 this check got moved into the less commonly
+used preempt_schedule_notrace(). This patch ensures that both functions use it.
+
+Reported-by: Mike Galbraith <umgwanakikbuti@gmail.com>
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ kernel/sched/core.c | 36 ++++++++++++++++++++++++++++--------
+ 1 file changed, 28 insertions(+), 8 deletions(-)
+
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -3461,6 +3461,30 @@ static void __sched notrace preempt_sche
+ } while (need_resched());
+ }
+
++#ifdef CONFIG_PREEMPT_LAZY
++/*
++ * If TIF_NEED_RESCHED is set then we allow being scheduled away since it is
++ * set by an RT task. Otherwise we try to avoid being scheduled out as long
++ * as the preempt_lazy_count counter is > 0.
++ */
++static int preemptible_lazy(void)
++{
++ if (test_thread_flag(TIF_NEED_RESCHED))
++ return 1;
++ if (current_thread_info()->preempt_lazy_count)
++ return 0;
++ return 1;
++}
++
++#else
++
++static int preemptible_lazy(void)
++{
++ return 1;
++}
++
++#endif
++
+ #ifdef CONFIG_PREEMPT
+ /*
+ * this is the entry point to schedule() from in-kernel preemption
+@@ -3475,6 +3499,8 @@ asmlinkage __visible void __sched notrac
+ */
+ if (likely(!preemptible()))
+ return;
++ if (!preemptible_lazy())
++ return;
+
+ preempt_schedule_common();
+ }
+@@ -3501,15 +3527,9 @@ asmlinkage __visible void __sched notrac
+
+ if (likely(!preemptible()))
+ return;
+-
+-#ifdef CONFIG_PREEMPT_LAZY
+- /*
+- * Check for lazy preemption
+- */
+- if (current_thread_info()->preempt_lazy_count &&
+- !test_thread_flag(TIF_NEED_RESCHED))
++ if (!preemptible_lazy())
+ return;
+-#endif
++
+ do {
+ preempt_disable_notrace();
+ /*
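
The logic added above boils down to one decision: a task may skip the reschedule only while its lazy count is held and no real TIF_NEED_RESCHED (set on behalf of an RT task) is pending. A stand-alone sketch of that decision, with a toy thread-info structure in place of current_thread_info():

    #include <stdbool.h>
    #include <stdio.h>

    /* toy stand-in for the relevant thread_info fields */
    struct toy_thread_info {
            bool need_resched;      /* TIF_NEED_RESCHED */
            int preempt_lazy_count;
    };

    /* mirrors preemptible_lazy(): a hard resched request always wins,
     * otherwise stay on the CPU while the lazy count is held */
    static int preemptible_lazy(const struct toy_thread_info *ti)
    {
            if (ti->need_resched)
                    return 1;
            if (ti->preempt_lazy_count)
                    return 0;
            return 1;
    }

    int main(void)
    {
            struct toy_thread_info lazy_held   = { false, 1 };
            struct toy_thread_info rt_waiting  = { true, 1 };

            printf("lazy section, no RT pressure -> preempt? %d\n",
                   preemptible_lazy(&lazy_held));
            printf("lazy section, RT task waiting -> preempt? %d\n",
                   preemptible_lazy(&rt_waiting));
            return 0;
    }
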
diff --git a/patches/preempt-lazy-support.patch b/patches/preempt-lazy-support.patch
index 6a11d6905eade3..6179e8f6dad3dd 100644
--- a/patches/preempt-lazy-support.patch
+++ b/patches/preempt-lazy-support.patch
@@ -165,7 +165,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
-@@ -2963,6 +2963,43 @@ static inline int test_tsk_need_resched(
+@@ -2966,6 +2966,43 @@ static inline int test_tsk_need_resched(
return unlikely(test_tsk_thread_flag(tsk,TIF_NEED_RESCHED));
}
@@ -296,7 +296,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
void resched_cpu(int cpu)
{
struct rq *rq = cpu_rq(cpu);
-@@ -2353,6 +2385,9 @@ int sched_fork(unsigned long clone_flags
+@@ -2344,6 +2376,9 @@ int sched_fork(unsigned long clone_flags
p->on_cpu = 0;
#endif
init_task_preempt_count(p);
@@ -306,15 +306,15 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
#ifdef CONFIG_SMP
plist_node_init(&p->pushable_tasks, MAX_PRIO);
RB_CLEAR_NODE(&p->pushable_dl_tasks);
-@@ -3183,6 +3218,7 @@ void migrate_disable(void)
+@@ -3142,6 +3177,7 @@ void migrate_disable(void)
}
preempt_disable();
+ preempt_lazy_disable();
pin_current_cpu();
p->migrate_disable = 1;
- preempt_enable();
-@@ -3241,6 +3277,7 @@ void migrate_enable(void)
+ p->nr_cpus_allowed = 1;
+@@ -3182,6 +3218,7 @@ void migrate_enable(void)
unpin_current_cpu();
preempt_enable();
@@ -322,7 +322,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
}
EXPORT_SYMBOL(migrate_enable);
#endif
-@@ -3380,6 +3417,7 @@ static void __sched notrace __schedule(b
+@@ -3319,6 +3356,7 @@ static void __sched notrace __schedule(b
next = pick_next_task(rq, prev);
clear_tsk_need_resched(prev);
@@ -330,7 +330,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
clear_preempt_need_resched();
rq->clock_skip_update = 0;
-@@ -3525,6 +3563,14 @@ asmlinkage __visible void __sched notrac
+@@ -3464,6 +3502,14 @@ asmlinkage __visible void __sched notrac
if (likely(!preemptible()))
return;
@@ -345,7 +345,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
do {
preempt_disable_notrace();
/*
-@@ -5265,7 +5311,9 @@ void init_idle(struct task_struct *idle,
+@@ -5204,7 +5250,9 @@ void init_idle(struct task_struct *idle,
/* Set the preempt count _outside_ the spinlocks! */
init_idle_preempt_count(idle, cpu);
diff --git a/patches/ptrace-don-t-open-IRQs-in-ptrace_freeze_traced-too-e.patch b/patches/ptrace-don-t-open-IRQs-in-ptrace_freeze_traced-too-e.patch
new file mode 100644
index 00000000000000..413a1077b48334
--- /dev/null
+++ b/patches/ptrace-don-t-open-IRQs-in-ptrace_freeze_traced-too-e.patch
@@ -0,0 +1,34 @@
+From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Wed, 13 Jan 2016 14:09:05 +0100
+Subject: ptrace: don't open IRQs in ptrace_freeze_traced() too early
+
+In the non-RT case spin_lock_irq() here disables interrupts, just as
+raw_spin_lock_irq() does. So the raw_spin_unlock_irq() in the unlock path
+enables interrupts too early.
+
+Reported-by: kernel test robot <ying.huang@linux.intel.com>
+Cc: stable-rt@vger.kernel.org
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ kernel/ptrace.c | 6 ++++--
+ 1 file changed, 4 insertions(+), 2 deletions(-)
+
+--- a/kernel/ptrace.c
++++ b/kernel/ptrace.c
+@@ -129,12 +129,14 @@ static bool ptrace_freeze_traced(struct
+
+ spin_lock_irq(&task->sighand->siglock);
+ if (task_is_traced(task) && !__fatal_signal_pending(task)) {
+- raw_spin_lock_irq(&task->pi_lock);
++ unsigned long flags;
++
++ raw_spin_lock_irqsave(&task->pi_lock, flags);
+ if (task->state & __TASK_TRACED)
+ task->state = __TASK_TRACED;
+ else
+ task->saved_state = __TASK_TRACED;
+- raw_spin_unlock_irq(&task->pi_lock);
++ raw_spin_unlock_irqrestore(&task->pi_lock, flags);
+ ret = true;
+ }
+ spin_unlock_irq(&task->sighand->siglock);
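
The fix can be pictured with a toy interrupt-state flag: an unconditional _irq unlock always re-enables interrupts, while the irqsave/irqrestore pair hands back whatever state the outer lock established. The sketch below is only that toy model, not kernel code.

    #include <stdio.h>

    /* toy model of the CPU's local interrupt state */
    static int irqs_enabled = 1;

    static void irq_disable(void) { irqs_enabled = 0; }
    static void irq_enable(void)  { irqs_enabled = 1; }
    static unsigned long irq_save(void)
    {
            unsigned long flags = irqs_enabled;
            irqs_enabled = 0;
            return flags;
    }
    static void irq_restore(unsigned long flags) { irqs_enabled = flags; }

    int main(void)
    {
            unsigned long flags;

            /* outer spin_lock_irq(&siglock): interrupts go off */
            irq_disable();

            /* buggy inner pair: raw_spin_lock_irq()/raw_spin_unlock_irq() */
            irq_disable();
            irq_enable();
            printf("inside siglock after _irq unlock: irqs on=%d (too early)\n",
                   irqs_enabled);

            /* fixed inner pair: irqsave/irqrestore keeps the outer state */
            irq_disable();
            flags = irq_save();
            irq_restore(flags);
            printf("inside siglock after irqrestore:  irqs on=%d\n",
                   irqs_enabled);

            /* outer spin_unlock_irq(&siglock) */
            irq_enable();
            return 0;
    }
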
diff --git a/patches/ptrace-fix-ptrace-vs-tasklist_lock-race.patch b/patches/ptrace-fix-ptrace-vs-tasklist_lock-race.patch
index 1da7a69886d280..71bf06166caa5d 100644
--- a/patches/ptrace-fix-ptrace-vs-tasklist_lock-race.patch
+++ b/patches/ptrace-fix-ptrace-vs-tasklist_lock-race.patch
@@ -111,7 +111,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
spin_unlock_irq(&task->sighand->siglock);
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
-@@ -1435,6 +1435,18 @@ int migrate_swap(struct task_struct *cur
+@@ -1426,6 +1426,18 @@ int migrate_swap(struct task_struct *cur
return ret;
}
@@ -130,7 +130,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
/*
* wait_task_inactive - wait for a thread to unschedule.
*
-@@ -1479,7 +1491,7 @@ unsigned long wait_task_inactive(struct
+@@ -1470,7 +1482,7 @@ unsigned long wait_task_inactive(struct
* is actually now running somewhere else!
*/
while (task_running(rq, p)) {
@@ -139,7 +139,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
return 0;
cpu_relax();
}
-@@ -1494,7 +1506,8 @@ unsigned long wait_task_inactive(struct
+@@ -1485,7 +1497,8 @@ unsigned long wait_task_inactive(struct
running = task_running(rq, p);
queued = task_on_rq_queued(p);
ncsw = 0;
diff --git a/patches/rcu-make-RCU_BOOST-default-on-RT.patch b/patches/rcu-make-RCU_BOOST-default-on-RT.patch
index 72d8a1dd7df81d..4abde47d1286f6 100644
--- a/patches/rcu-make-RCU_BOOST-default-on-RT.patch
+++ b/patches/rcu-make-RCU_BOOST-default-on-RT.patch
@@ -7,14 +7,22 @@ often if the priority of the RCU thread is too low. Making boosting
default on RT should help in those case and it can be switched off if
someone knows better.
-
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
- init/Kconfig | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
+ init/Kconfig | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/init/Kconfig
+++ b/init/Kconfig
+@@ -498,7 +498,7 @@ config TINY_RCU
+
+ config RCU_EXPERT
+ bool "Make expert-level adjustments to RCU configuration"
+- default n
++ default y if PREEMPT_RT_FULL
+ help
+ This option needs to be enabled if you wish to make
+ expert-level adjustments to RCU configuration. By default,
@@ -641,7 +641,7 @@ config TREE_RCU_TRACE
config RCU_BOOST
bool "Enable RCU priority boosting"
diff --git a/patches/rt-introduce-cpu-chill.patch b/patches/rt-introduce-cpu-chill.patch
index c802a82edc9783..4d0e3fe954b329 100644
--- a/patches/rt-introduce-cpu-chill.patch
+++ b/patches/rt-introduce-cpu-chill.patch
@@ -100,7 +100,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
#endif /* defined(_LINUX_DELAY_H) */
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
-@@ -1776,6 +1776,25 @@ SYSCALL_DEFINE2(nanosleep, struct timesp
+@@ -1775,6 +1775,25 @@ SYSCALL_DEFINE2(nanosleep, struct timesp
return hrtimer_nanosleep(&tu, rmtp, HRTIMER_MODE_REL, CLOCK_MONOTONIC);
}
diff --git a/patches/rtmutex-Use-chainwalking-control-enum.patch b/patches/rtmutex-Use-chainwalking-control-enum.patch
index dcd12f08db9451..322db42d54b044 100644
--- a/patches/rtmutex-Use-chainwalking-control-enum.patch
+++ b/patches/rtmutex-Use-chainwalking-control-enum.patch
@@ -1,7 +1,6 @@
-From 13f032043086194982ac91c68124adae545f5627 Mon Sep 17 00:00:00 2001
From: "bmouring@ni.com" <bmouring@ni.com>
Date: Tue, 15 Dec 2015 17:07:30 -0600
-Subject: [PATCH] rtmutex: Use chainwalking control enum
+Subject: rtmutex: Use chainwalking control enum
In 8930ed80 (rtmutex: Cleanup deadlock detector debug logic),
chainwalking control enums were introduced to limit the deadlock
diff --git a/patches/sched-might-sleep-do-not-account-rcu-depth.patch b/patches/sched-might-sleep-do-not-account-rcu-depth.patch
index e5539c5fa33802..32015ea01dc216 100644
--- a/patches/sched-might-sleep-do-not-account-rcu-depth.patch
+++ b/patches/sched-might-sleep-do-not-account-rcu-depth.patch
@@ -36,7 +36,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/* Internal to kernel */
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
-@@ -7729,7 +7729,7 @@ void __init sched_init(void)
+@@ -7668,7 +7668,7 @@ void __init sched_init(void)
#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
static inline int preempt_count_equals(int preempt_offset)
{
diff --git a/patches/sched-mmdrop-delayed.patch b/patches/sched-mmdrop-delayed.patch
index ea0c9073b65a3f..b43090c37f6050 100644
--- a/patches/sched-mmdrop-delayed.patch
+++ b/patches/sched-mmdrop-delayed.patch
@@ -84,7 +84,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
*/
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
-@@ -2602,8 +2602,12 @@ static struct rq *finish_task_switch(str
+@@ -2593,8 +2593,12 @@ static struct rq *finish_task_switch(str
finish_arch_post_lock_switch();
fire_sched_in_preempt_notifiers(current);
@@ -98,7 +98,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
if (unlikely(prev_state == TASK_DEAD)) {
if (prev->sched_class->task_dead)
prev->sched_class->task_dead(prev);
-@@ -5317,6 +5321,8 @@ void sched_setnuma(struct task_struct *p
+@@ -5256,6 +5260,8 @@ void sched_setnuma(struct task_struct *p
#endif /* CONFIG_NUMA_BALANCING */
#ifdef CONFIG_HOTPLUG_CPU
@@ -107,7 +107,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/*
* Ensures that the idle task is using init_mm right before its cpu goes
* offline.
-@@ -5331,7 +5337,11 @@ void idle_task_exit(void)
+@@ -5270,7 +5276,11 @@ void idle_task_exit(void)
switch_mm(mm, &init_mm, current);
finish_arch_post_lock_switch();
}
@@ -120,7 +120,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
}
/*
-@@ -5703,6 +5713,10 @@ migration_call(struct notifier_block *nf
+@@ -5642,6 +5652,10 @@ migration_call(struct notifier_block *nf
case CPU_DEAD:
calc_load_migrate(rq);
diff --git a/patches/sched-provide-a-tsk_nr_cpus_allowed-helper.patch b/patches/sched-provide-a-tsk_nr_cpus_allowed-helper.patch
new file mode 100644
index 00000000000000..27a821eb26d902
--- /dev/null
+++ b/patches/sched-provide-a-tsk_nr_cpus_allowed-helper.patch
@@ -0,0 +1,261 @@
+From: Thomas Gleixner <tglx@linutronix.de>
+Date: Mon, 18 Jan 2016 17:21:59 +0100
+Subject: sched: provide a tsk_nr_cpus_allowed() helper
+
+tsk_nr_cpus_allowed() is an accessor for task->nr_cpus_allowed which allows
+us to change the representation of ->nr_cpus_allowed if required.
+
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ include/linux/sched.h | 5 +++++
+ kernel/sched/core.c | 2 +-
+ kernel/sched/deadline.c | 28 ++++++++++++++--------------
+ kernel/sched/rt.c | 24 ++++++++++++------------
+ 4 files changed, 32 insertions(+), 27 deletions(-)
+
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1832,6 +1832,11 @@ extern int arch_task_struct_size __read_
+ /* Future-safe accessor for struct task_struct's cpus_allowed. */
+ #define tsk_cpus_allowed(tsk) (&(tsk)->cpus_allowed)
+
++static inline int tsk_nr_cpus_allowed(struct task_struct *p)
++{
++ return p->nr_cpus_allowed;
++}
++
+ #define TNF_MIGRATED 0x01
+ #define TNF_NO_GROUP 0x02
+ #define TNF_SHARED 0x04
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -1624,7 +1624,7 @@ int select_task_rq(struct task_struct *p
+ {
+ lockdep_assert_held(&p->pi_lock);
+
+- if (p->nr_cpus_allowed > 1)
++ if (tsk_nr_cpus_allowed(p) > 1)
+ cpu = p->sched_class->select_task_rq(p, cpu, sd_flags, wake_flags);
+
+ /*
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -134,7 +134,7 @@ static void inc_dl_migration(struct sche
+ {
+ struct task_struct *p = dl_task_of(dl_se);
+
+- if (p->nr_cpus_allowed > 1)
++ if (tsk_nr_cpus_allowed(p) > 1)
+ dl_rq->dl_nr_migratory++;
+
+ update_dl_migration(dl_rq);
+@@ -144,7 +144,7 @@ static void dec_dl_migration(struct sche
+ {
+ struct task_struct *p = dl_task_of(dl_se);
+
+- if (p->nr_cpus_allowed > 1)
++ if (tsk_nr_cpus_allowed(p) > 1)
+ dl_rq->dl_nr_migratory--;
+
+ update_dl_migration(dl_rq);
+@@ -989,7 +989,7 @@ static void enqueue_task_dl(struct rq *r
+
+ enqueue_dl_entity(&p->dl, pi_se, flags);
+
+- if (!task_current(rq, p) && p->nr_cpus_allowed > 1)
++ if (!task_current(rq, p) && tsk_nr_cpus_allowed(p) > 1)
+ enqueue_pushable_dl_task(rq, p);
+ }
+
+@@ -1067,9 +1067,9 @@ select_task_rq_dl(struct task_struct *p,
+ * try to make it stay here, it might be important.
+ */
+ if (unlikely(dl_task(curr)) &&
+- (curr->nr_cpus_allowed < 2 ||
++ (tsk_nr_cpus_allowed(curr) < 2 ||
+ !dl_entity_preempt(&p->dl, &curr->dl)) &&
+- (p->nr_cpus_allowed > 1)) {
++ (tsk_nr_cpus_allowed(p) > 1)) {
+ int target = find_later_rq(p);
+
+ if (target != -1 &&
+@@ -1090,7 +1090,7 @@ static void check_preempt_equal_dl(struc
+ * Current can't be migrated, useless to reschedule,
+ * let's hope p can move out.
+ */
+- if (rq->curr->nr_cpus_allowed == 1 ||
++ if (tsk_nr_cpus_allowed(rq->curr) == 1 ||
+ cpudl_find(&rq->rd->cpudl, rq->curr, NULL) == -1)
+ return;
+
+@@ -1098,7 +1098,7 @@ static void check_preempt_equal_dl(struc
+ * p is migratable, so let's not schedule it and
+ * see if it is pushed or pulled somewhere else.
+ */
+- if (p->nr_cpus_allowed != 1 &&
++ if (tsk_nr_cpus_allowed(p) != 1 &&
+ cpudl_find(&rq->rd->cpudl, p, NULL) != -1)
+ return;
+
+@@ -1212,7 +1212,7 @@ static void put_prev_task_dl(struct rq *
+ {
+ update_curr_dl(rq);
+
+- if (on_dl_rq(&p->dl) && p->nr_cpus_allowed > 1)
++ if (on_dl_rq(&p->dl) && tsk_nr_cpus_allowed(p) > 1)
+ enqueue_pushable_dl_task(rq, p);
+ }
+
+@@ -1335,7 +1335,7 @@ static int find_later_rq(struct task_str
+ if (unlikely(!later_mask))
+ return -1;
+
+- if (task->nr_cpus_allowed == 1)
++ if (tsk_nr_cpus_allowed(task) == 1)
+ return -1;
+
+ /*
+@@ -1480,7 +1480,7 @@ static struct task_struct *pick_next_pus
+
+ BUG_ON(rq->cpu != task_cpu(p));
+ BUG_ON(task_current(rq, p));
+- BUG_ON(p->nr_cpus_allowed <= 1);
++ BUG_ON(tsk_nr_cpus_allowed(p) <= 1);
+
+ BUG_ON(!task_on_rq_queued(p));
+ BUG_ON(!dl_task(p));
+@@ -1519,7 +1519,7 @@ static int push_dl_task(struct rq *rq)
+ */
+ if (dl_task(rq->curr) &&
+ dl_time_before(next_task->dl.deadline, rq->curr->dl.deadline) &&
+- rq->curr->nr_cpus_allowed > 1) {
++ tsk_nr_cpus_allowed(rq->curr) > 1) {
+ resched_curr(rq);
+ return 0;
+ }
+@@ -1666,9 +1666,9 @@ static void task_woken_dl(struct rq *rq,
+ {
+ if (!task_running(rq, p) &&
+ !test_tsk_need_resched(rq->curr) &&
+- p->nr_cpus_allowed > 1 &&
++ tsk_nr_cpus_allowed(p) > 1 &&
+ dl_task(rq->curr) &&
+- (rq->curr->nr_cpus_allowed < 2 ||
++ (tsk_nr_cpus_allowed(rq->curr) < 2 ||
+ !dl_entity_preempt(&p->dl, &rq->curr->dl))) {
+ push_dl_tasks(rq);
+ }
+@@ -1769,7 +1769,7 @@ static void switched_to_dl(struct rq *rq
+ {
+ if (task_on_rq_queued(p) && rq->curr != p) {
+ #ifdef CONFIG_SMP
+- if (p->nr_cpus_allowed > 1 && rq->dl.overloaded)
++ if (tsk_nr_cpus_allowed(p) > 1 && rq->dl.overloaded)
+ queue_push_tasks(rq);
+ #else
+ if (dl_task(rq->curr))
+--- a/kernel/sched/rt.c
++++ b/kernel/sched/rt.c
+@@ -326,7 +326,7 @@ static void inc_rt_migration(struct sche
+ rt_rq = &rq_of_rt_rq(rt_rq)->rt;
+
+ rt_rq->rt_nr_total++;
+- if (p->nr_cpus_allowed > 1)
++ if (tsk_nr_cpus_allowed(p) > 1)
+ rt_rq->rt_nr_migratory++;
+
+ update_rt_migration(rt_rq);
+@@ -343,7 +343,7 @@ static void dec_rt_migration(struct sche
+ rt_rq = &rq_of_rt_rq(rt_rq)->rt;
+
+ rt_rq->rt_nr_total--;
+- if (p->nr_cpus_allowed > 1)
++ if (tsk_nr_cpus_allowed(p) > 1)
+ rt_rq->rt_nr_migratory--;
+
+ update_rt_migration(rt_rq);
+@@ -1262,7 +1262,7 @@ enqueue_task_rt(struct rq *rq, struct ta
+
+ enqueue_rt_entity(rt_se, flags & ENQUEUE_HEAD);
+
+- if (!task_current(rq, p) && p->nr_cpus_allowed > 1)
++ if (!task_current(rq, p) && tsk_nr_cpus_allowed(p) > 1)
+ enqueue_pushable_task(rq, p);
+ }
+
+@@ -1351,7 +1351,7 @@ select_task_rq_rt(struct task_struct *p,
+ * will have to sort it out.
+ */
+ if (curr && unlikely(rt_task(curr)) &&
+- (curr->nr_cpus_allowed < 2 ||
++ (tsk_nr_cpus_allowed(curr) < 2 ||
+ curr->prio <= p->prio)) {
+ int target = find_lowest_rq(p);
+
+@@ -1375,7 +1375,7 @@ static void check_preempt_equal_prio(str
+ * Current can't be migrated, useless to reschedule,
+ * let's hope p can move out.
+ */
+- if (rq->curr->nr_cpus_allowed == 1 ||
++ if (tsk_nr_cpus_allowed(rq->curr) == 1 ||
+ !cpupri_find(&rq->rd->cpupri, rq->curr, NULL))
+ return;
+
+@@ -1383,7 +1383,7 @@ static void check_preempt_equal_prio(str
+ * p is migratable, so let's not schedule it and
+ * see if it is pushed or pulled somewhere else.
+ */
+- if (p->nr_cpus_allowed != 1
++ if (tsk_nr_cpus_allowed(p) != 1
+ && cpupri_find(&rq->rd->cpupri, p, NULL))
+ return;
+
+@@ -1517,7 +1517,7 @@ static void put_prev_task_rt(struct rq *
+ * The previous task needs to be made eligible for pushing
+ * if it is still active
+ */
+- if (on_rt_rq(&p->rt) && p->nr_cpus_allowed > 1)
++ if (on_rt_rq(&p->rt) && tsk_nr_cpus_allowed(p) > 1)
+ enqueue_pushable_task(rq, p);
+ }
+
+@@ -1567,7 +1567,7 @@ static int find_lowest_rq(struct task_st
+ if (unlikely(!lowest_mask))
+ return -1;
+
+- if (task->nr_cpus_allowed == 1)
++ if (tsk_nr_cpus_allowed(task) == 1)
+ return -1; /* No other targets possible */
+
+ if (!cpupri_find(&task_rq(task)->rd->cpupri, task, lowest_mask))
+@@ -1699,7 +1699,7 @@ static struct task_struct *pick_next_pus
+
+ BUG_ON(rq->cpu != task_cpu(p));
+ BUG_ON(task_current(rq, p));
+- BUG_ON(p->nr_cpus_allowed <= 1);
++ BUG_ON(tsk_nr_cpus_allowed(p) <= 1);
+
+ BUG_ON(!task_on_rq_queued(p));
+ BUG_ON(!rt_task(p));
+@@ -2059,9 +2059,9 @@ static void task_woken_rt(struct rq *rq,
+ {
+ if (!task_running(rq, p) &&
+ !test_tsk_need_resched(rq->curr) &&
+- p->nr_cpus_allowed > 1 &&
++ tsk_nr_cpus_allowed(p) > 1 &&
+ (dl_task(rq->curr) || rt_task(rq->curr)) &&
+- (rq->curr->nr_cpus_allowed < 2 ||
++ (tsk_nr_cpus_allowed(rq->curr) < 2 ||
+ rq->curr->prio <= p->prio))
+ push_rt_tasks(rq);
+ }
+@@ -2134,7 +2134,7 @@ static void switched_to_rt(struct rq *rq
+ */
+ if (task_on_rq_queued(p) && rq->curr != p) {
+ #ifdef CONFIG_SMP
+- if (p->nr_cpus_allowed > 1 && rq->rt.overloaded)
++ if (tsk_nr_cpus_allowed(p) > 1 && rq->rt.overloaded)
+ queue_push_tasks(rq);
+ #else
+ if (p->prio < rq->curr->prio)
diff --git a/patches/sched-rt-mutex-wakeup.patch b/patches/sched-rt-mutex-wakeup.patch
index 164b3c5a9586f8..fa8dd66ee8ca42 100644
--- a/patches/sched-rt-mutex-wakeup.patch
+++ b/patches/sched-rt-mutex-wakeup.patch
@@ -35,7 +35,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
extern void kick_process(struct task_struct *tsk);
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
-@@ -1958,8 +1958,25 @@ try_to_wake_up(struct task_struct *p, un
+@@ -1949,8 +1949,25 @@ try_to_wake_up(struct task_struct *p, un
*/
smp_mb__before_spinlock();
raw_spin_lock_irqsave(&p->pi_lock, flags);
@@ -62,7 +62,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
trace_sched_waking(p);
-@@ -2092,6 +2109,18 @@ int wake_up_process(struct task_struct *
+@@ -2083,6 +2100,18 @@ int wake_up_process(struct task_struct *
}
EXPORT_SYMBOL(wake_up_process);
diff --git a/patches/sched-ttwu-ensure-success-return-is-correct.patch b/patches/sched-ttwu-ensure-success-return-is-correct.patch
index 5c1d0b3a5f7b81..06d58b4c333738 100644
--- a/patches/sched-ttwu-ensure-success-return-is-correct.patch
+++ b/patches/sched-ttwu-ensure-success-return-is-correct.patch
@@ -20,7 +20,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
-@@ -1965,8 +1965,10 @@ try_to_wake_up(struct task_struct *p, un
+@@ -1956,8 +1956,10 @@ try_to_wake_up(struct task_struct *p, un
* if the wakeup condition is true.
*/
if (!(wake_flags & WF_LOCK_SLEEPER)) {
diff --git a/patches/sched-use-tsk_cpus_allowed-instead-of-accessing-cpus.patch b/patches/sched-use-tsk_cpus_allowed-instead-of-accessing-cpus.patch
new file mode 100644
index 00000000000000..fa5eb8ef713f30
--- /dev/null
+++ b/patches/sched-use-tsk_cpus_allowed-instead-of-accessing-cpus.patch
@@ -0,0 +1,57 @@
+From: Thomas Gleixner <tglx@linutronix.de>
+Date: Mon, 18 Jan 2016 17:10:39 +0100
+Subject: sched: use tsk_cpus_allowed() instead of accessing
+ ->cpus_allowed
+
+Use the future-safe accessor for struct task_struct's ->cpus_allowed.
+
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ kernel/sched/cpudeadline.c | 4 ++--
+ kernel/sched/cpupri.c | 4 ++--
+ kernel/sched/deadline.c | 2 +-
+ 3 files changed, 5 insertions(+), 5 deletions(-)
+
+--- a/kernel/sched/cpudeadline.c
++++ b/kernel/sched/cpudeadline.c
+@@ -103,10 +103,10 @@ int cpudl_find(struct cpudl *cp, struct
+ const struct sched_dl_entity *dl_se = &p->dl;
+
+ if (later_mask &&
+- cpumask_and(later_mask, cp->free_cpus, &p->cpus_allowed)) {
++ cpumask_and(later_mask, cp->free_cpus, tsk_cpus_allowed(p))) {
+ best_cpu = cpumask_any(later_mask);
+ goto out;
+- } else if (cpumask_test_cpu(cpudl_maximum(cp), &p->cpus_allowed) &&
++ } else if (cpumask_test_cpu(cpudl_maximum(cp), tsk_cpus_allowed(p)) &&
+ dl_time_before(dl_se->deadline, cp->elements[0].dl)) {
+ best_cpu = cpudl_maximum(cp);
+ if (later_mask)
+--- a/kernel/sched/cpupri.c
++++ b/kernel/sched/cpupri.c
+@@ -103,11 +103,11 @@ int cpupri_find(struct cpupri *cp, struc
+ if (skip)
+ continue;
+
+- if (cpumask_any_and(&p->cpus_allowed, vec->mask) >= nr_cpu_ids)
++ if (cpumask_any_and(tsk_cpus_allowed(p), vec->mask) >= nr_cpu_ids)
+ continue;
+
+ if (lowest_mask) {
+- cpumask_and(lowest_mask, &p->cpus_allowed, vec->mask);
++ cpumask_and(lowest_mask, tsk_cpus_allowed(p), vec->mask);
+
+ /*
+ * We have to ensure that we have at least one bit
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -1441,7 +1441,7 @@ static struct rq *find_lock_later_rq(str
+ if (double_lock_balance(rq, later_rq)) {
+ if (unlikely(task_rq(task) != rq ||
+ !cpumask_test_cpu(later_rq->cpu,
+- &task->cpus_allowed) ||
++ tsk_cpus_allowed(task)) ||
+ task_running(rq, task) ||
+ !task_on_rq_queued(task))) {
+ double_unlock_balance(rq, later_rq);
diff --git a/patches/sched-workqueue-Only-wake-up-idle-workers-if-not-blo.patch b/patches/sched-workqueue-Only-wake-up-idle-workers-if-not-blo.patch
index d7f676b5b7d324..0bcde6e4e96ae2 100644
--- a/patches/sched-workqueue-Only-wake-up-idle-workers-if-not-blo.patch
+++ b/patches/sched-workqueue-Only-wake-up-idle-workers-if-not-blo.patch
@@ -23,7 +23,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
-@@ -3326,8 +3326,10 @@ static void __sched notrace __schedule(b
+@@ -3265,8 +3265,10 @@ static void __sched notrace __schedule(b
* If a worker went to sleep, notify and ask workqueue
* whether it wants to wake up a task to maintain
* concurrency.
diff --git a/patches/series b/patches/series
index 9e9f18f91daccb..1d6c181dddb63e 100644
--- a/patches/series
+++ b/patches/series
@@ -13,6 +13,11 @@
############################################################
# Stuff broken upstream, patches submitted
############################################################
+btrfs-initialize-the-seq-counter-in-struct-btrfs_dev.patch
+sched-use-tsk_cpus_allowed-instead-of-accessing-cpus.patch
+sched-provide-a-tsk_nr_cpus_allowed-helper.patch
+drivers-cpuidle-coupled-fix-warning-cpuidle_coupled_.patch
+drivers-media-vsp1_video-fix-compile-error.patch
############################################################
# Stuff which needs addressing upstream, but requires more
@@ -63,7 +68,6 @@ kernel-SRCU-provide-a-static-initializer.patch
############################################################
# Stuff which should go upstream ASAP
############################################################
-0009-ARM-OMAP2-Drop-the-concept-of-certain-power-domains-.patch
# SCHED BLOCK/WQ
block-shorten-interrupt-disabled-regions.patch
@@ -135,6 +139,8 @@ pci-access-use-__wake_up_all_locked.patch
# TRACING
latency-hist.patch
+latency_hist-update-sched_wakeup-probe.patch
+trace-latency-hist-Consider-new-argument-when-probin.patch
# HW LATENCY DETECTOR - this really wants a rewrite
hwlatdetect.patch
@@ -227,7 +233,6 @@ slub-disable-SLUB_CPU_PARTIAL.patch
mm-page-alloc-use-local-lock-on-target-cpu.patch
mm-memcontrol-Don-t-call-schedule_work_on-in-preempt.patch
mm-memcontrol-do_not_disable_irq.patch
-mm-rmap-retry-lock-check-in-anon_vma_free.patch_vma_free.patch
# RADIX TREE
radix-tree-rt-aware.patch
@@ -294,6 +299,7 @@ softirq-preempt-fix-3-re.patch
softirq-disable-softirq-stacks-for-rt.patch
softirq-split-locks.patch
irq-allow-disabling-of-softirq-processing-in-irq-thread-context.patch
+softirq-split-timer-softirqs-out-of-ksoftirqd.patch
rtmutex-trylock-is-okay-on-RT.patch
# RAID5
@@ -319,6 +325,8 @@ ptrace-fix-ptrace-vs-tasklist_lock-race.patch
# RTMUTEX Fallout
tasklist-lock-fix-section-conflict.patch
+#fold
+ptrace-don-t-open-IRQs-in-ptrace_freeze_traced-too-e.patch
# RCU
peter_zijlstra-frob-rcu.patch
@@ -407,6 +415,14 @@ sunrpc-make-svc_xprt_do_enqueue-use-get_cpu_light.patch
net__Make_synchronize-rcu_expedited_conditional-on-non-rt
skbufhead-raw-lock.patch
net-core-cpuhotplug-drain-input_pkt_queue-lockless.patch
+net-move-xmit_recursion-to-per-task-variable-on-RT.patch
+net-provide-a-way-to-delegate-processing-a-softirq-t.patch
+
+# NETWORK livelock fix
+net-tx-action-avoid-livelock-on-rt.patch
+
+# NETWORK DEBUGGING AID
+ping-sysrq.patch
# irqwork
irqwork-push_most_work_into_softirq_context.patch
@@ -432,12 +448,6 @@ ARM-enable-irq-in-translation-section-permission-fau.patch
# ARM64
arm64-xen--Make-XEN-depend-on-non-rt.patch
-# NETWORK livelock fix
-net-tx-action-avoid-livelock-on-rt.patch
-
-# NETWORK DEBUGGING AID
-ping-sysrq.patch
-
# KGDB
kgb-serial-hackaround.patch
@@ -501,6 +511,7 @@ scsi-qla2xxx-fix-bug-sleeping-function-called-from-invalid-context.patch
# NET
upstream-net-rt-remove-preemption-disabling-in-netif_rx.patch
net-another-local-irq-disable-alloc-atomic-headache.patch
+net-core-protect-users-of-napi_alloc_cache-against-r.patch
net-fix-iptable-xt-write-recseq-begin-rt-fallout.patch
net-make-devnet_rename_seq-a-mutex.patch
@@ -521,10 +532,12 @@ rcu-make-RCU_BOOST-default-on-RT.patch
# PREEMPT LAZY
preempt-lazy-support.patch
+preempt-lazy-check-preempt_schedule.patch
x86-preempt-lazy.patch
arm-preempt-lazy-support.patch
powerpc-preempt-lazy-support.patch
arch-arm64-Add-lazy-preempt-support.patch
+arm-arm64-lazy-preempt-add-TIF_NEED_RESCHED_LAZY-to-.patch
# LEDS
leds-trigger-disable-CPU-trigger-on-RT.patch
@@ -553,8 +566,5 @@ md-disable-bcache.patch
# WORKQUEUE SIGH
workqueue-prevent-deadlock-stall.patch
-# TRACING
-latency_hist-update-sched_wakeup-probe.patch
-
# Add RT to version
localversion.patch
diff --git a/patches/softirq-disable-softirq-stacks-for-rt.patch b/patches/softirq-disable-softirq-stacks-for-rt.patch
index af38648537fa1d..b4eeadf2c26825 100644
--- a/patches/softirq-disable-softirq-stacks-for-rt.patch
+++ b/patches/softirq-disable-softirq-stacks-for-rt.patch
@@ -145,7 +145,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
{
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
-@@ -446,7 +446,7 @@ struct softirq_action
+@@ -447,7 +447,7 @@ struct softirq_action
asmlinkage void do_softirq(void);
asmlinkage void __do_softirq(void);
diff --git a/patches/softirq-split-locks.patch b/patches/softirq-split-locks.patch
index 31e499340ca4b1..6527190fc5d36b 100644
--- a/patches/softirq-split-locks.patch
+++ b/patches/softirq-split-locks.patch
@@ -85,7 +85,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
#endif /* _LINUX_BH_H */
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
-@@ -443,10 +443,11 @@ struct softirq_action
+@@ -444,10 +444,11 @@ struct softirq_action
void (*action)(struct softirq_action *);
};
@@ -99,7 +99,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
void do_softirq_own_stack(void);
#else
static inline void do_softirq_own_stack(void)
-@@ -454,6 +455,9 @@ static inline void do_softirq_own_stack(
+@@ -455,6 +456,9 @@ static inline void do_softirq_own_stack(
__do_softirq();
}
#endif
@@ -109,7 +109,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
extern void open_softirq(int nr, void (*action)(struct softirq_action *));
extern void softirq_init(void);
-@@ -461,6 +465,7 @@ extern void __raise_softirq_irqoff(unsig
+@@ -462,6 +466,7 @@ extern void __raise_softirq_irqoff(unsig
extern void raise_softirq_irqoff(unsigned int nr);
extern void raise_softirq(unsigned int nr);
@@ -117,7 +117,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
DECLARE_PER_CPU(struct task_struct *, ksoftirqd);
-@@ -618,6 +623,12 @@ void tasklet_hrtimer_cancel(struct taskl
+@@ -619,6 +624,12 @@ void tasklet_hrtimer_cancel(struct taskl
tasklet_kill(&ttimer->tasklet);
}
@@ -575,8 +575,8 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+
+ do_current_softirqs();
+ current->softirq_nestcnt--;
-+ rcu_note_context_switch();
+ local_irq_enable();
++ cond_resched_rcu_qs();
+}
+
+/*
diff --git a/patches/softirq-split-timer-softirqs-out-of-ksoftirqd.patch b/patches/softirq-split-timer-softirqs-out-of-ksoftirqd.patch
new file mode 100644
index 00000000000000..213b6e0b4a57f3
--- /dev/null
+++ b/patches/softirq-split-timer-softirqs-out-of-ksoftirqd.patch
@@ -0,0 +1,207 @@
+From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Wed, 20 Jan 2016 16:34:17 +0100
+Subject: softirq: split timer softirqs out of ksoftirqd
+
+On -RT ksoftirqd runs at SCHED_FIFO (prio 1) and deals mostly with
+timer wakeups, which can not happen in hardirq context. The priority has
+been raised from the default SCHED_OTHER so that the timer wakeups do
+not happen too late.
+With enough networking load it is possible that the system never goes
+idle and instead keeps scheduling ksoftirqd and everything else with a
+higher priority. One of the tasks left behind is one of RCU's threads,
+so we see stalls and eventually run out of memory.
+This patch moves the TIMER and HRTIMER softirqs out of the `ksoftirqd`
+thread into their own `ktimersoftd` thread. The former can now run at
+SCHED_OTHER (same as mainline) and the latter at SCHED_FIFO due to the
+timer wakeups.
+
+From a networking point of view: the NAPI callback runs after the network
+interrupt thread completes. If its run time takes too long, the NAPI code
+schedules `ksoftirqd`, which now runs at SCHED_OTHER and no longer defers RCU.
+
+Cc: stable-rt@vger.kernel.org
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ kernel/softirq.c | 85 +++++++++++++++++++++++++++++++++++++++++++++++--------
+ 1 file changed, 74 insertions(+), 11 deletions(-)
+
+--- a/kernel/softirq.c
++++ b/kernel/softirq.c
+@@ -58,6 +58,10 @@ EXPORT_SYMBOL(irq_stat);
+ static struct softirq_action softirq_vec[NR_SOFTIRQS] __cacheline_aligned_in_smp;
+
+ DEFINE_PER_CPU(struct task_struct *, ksoftirqd);
++#ifdef CONFIG_PREEMPT_RT_FULL
++#define TIMER_SOFTIRQS ((1 << TIMER_SOFTIRQ) | (1 << HRTIMER_SOFTIRQ))
++DEFINE_PER_CPU(struct task_struct *, ktimer_softirqd);
++#endif
+
+ const char * const softirq_to_name[NR_SOFTIRQS] = {
+ "HI", "TIMER", "NET_TX", "NET_RX", "BLOCK", "BLOCK_IOPOLL",
+@@ -171,6 +175,17 @@ static void wakeup_softirqd(void)
+ wake_up_process(tsk);
+ }
+
++#ifdef CONFIG_PREEMPT_RT_FULL
++static void wakeup_timer_softirqd(void)
++{
++ /* Interrupts are disabled: no need to stop preemption */
++ struct task_struct *tsk = __this_cpu_read(ktimer_softirqd);
++
++ if (tsk && tsk->state != TASK_RUNNING)
++ wake_up_process(tsk);
++}
++#endif
++
+ static void handle_softirq(unsigned int vec_nr)
+ {
+ struct softirq_action *h = softirq_vec + vec_nr;
+@@ -473,7 +488,6 @@ void __raise_softirq_irqoff(unsigned int
+ static inline void local_bh_disable_nort(void) { local_bh_disable(); }
+ static inline void _local_bh_enable_nort(void) { _local_bh_enable(); }
+ static void ksoftirqd_set_sched_params(unsigned int cpu) { }
+-static void ksoftirqd_clr_sched_params(unsigned int cpu, bool online) { }
+
+ #else /* !PREEMPT_RT_FULL */
+
+@@ -618,8 +632,12 @@ void thread_do_softirq(void)
+
+ static void do_raise_softirq_irqoff(unsigned int nr)
+ {
++ unsigned int mask;
++
++ mask = 1UL << nr;
++
+ trace_softirq_raise(nr);
+- or_softirq_pending(1UL << nr);
++ or_softirq_pending(mask);
+
+ /*
+ * If we are not in a hard interrupt and inside a bh disabled
+@@ -628,16 +646,30 @@ static void do_raise_softirq_irqoff(unsi
+ * delegate it to ksoftirqd.
+ */
+ if (!in_irq() && current->softirq_nestcnt)
+- current->softirqs_raised |= (1U << nr);
+- else if (__this_cpu_read(ksoftirqd))
+- __this_cpu_read(ksoftirqd)->softirqs_raised |= (1U << nr);
++ current->softirqs_raised |= mask;
++ else if (!__this_cpu_read(ksoftirqd) || !__this_cpu_read(ktimer_softirqd))
++ return;
++
++ if (mask & TIMER_SOFTIRQS)
++ __this_cpu_read(ktimer_softirqd)->softirqs_raised |= mask;
++ else
++ __this_cpu_read(ksoftirqd)->softirqs_raised |= mask;
++}
++
++static void wakeup_proper_softirq(unsigned int nr)
++{
++ if ((1UL << nr) & TIMER_SOFTIRQS)
++ wakeup_timer_softirqd();
++ else
++ wakeup_softirqd();
+ }
+
++
+ void __raise_softirq_irqoff(unsigned int nr)
+ {
+ do_raise_softirq_irqoff(nr);
+ if (!in_irq() && !current->softirq_nestcnt)
+- wakeup_softirqd();
++ wakeup_proper_softirq(nr);
+ }
+
+ /*
+@@ -663,7 +695,7 @@ void raise_softirq_irqoff(unsigned int n
+ * raise a WARN() if the condition is met.
+ */
+ if (!current->softirq_nestcnt)
+- wakeup_softirqd();
++ wakeup_proper_softirq(nr);
+ }
+
+ static inline int ksoftirqd_softirq_pending(void)
+@@ -676,22 +708,37 @@ static inline void _local_bh_enable_nort
+
+ static inline void ksoftirqd_set_sched_params(unsigned int cpu)
+ {
++ /* Take over all but timer pending softirqs when starting */
++ local_irq_disable();
++ current->softirqs_raised = local_softirq_pending() & ~TIMER_SOFTIRQS;
++ local_irq_enable();
++}
++
++static inline void ktimer_softirqd_set_sched_params(unsigned int cpu)
++{
+ struct sched_param param = { .sched_priority = 1 };
+
+ sched_setscheduler(current, SCHED_FIFO, &param);
+- /* Take over all pending softirqs when starting */
++
++ /* Take over timer pending softirqs when starting */
+ local_irq_disable();
+- current->softirqs_raised = local_softirq_pending();
++ current->softirqs_raised = local_softirq_pending() & TIMER_SOFTIRQS;
+ local_irq_enable();
+ }
+
+-static inline void ksoftirqd_clr_sched_params(unsigned int cpu, bool online)
++static inline void ktimer_softirqd_clr_sched_params(unsigned int cpu,
++ bool online)
+ {
+ struct sched_param param = { .sched_priority = 0 };
+
+ sched_setscheduler(current, SCHED_NORMAL, &param);
+ }
+
++static int ktimer_softirqd_should_run(unsigned int cpu)
++{
++ return current->softirqs_raised;
++}
++
+ #endif /* PREEMPT_RT_FULL */
+ /*
+ * Enter an interrupt context.
+@@ -741,6 +788,9 @@ static inline void invoke_softirq(void)
+ if (__this_cpu_read(ksoftirqd) &&
+ __this_cpu_read(ksoftirqd)->softirqs_raised)
+ wakeup_softirqd();
++ if (__this_cpu_read(ktimer_softirqd) &&
++ __this_cpu_read(ktimer_softirqd)->softirqs_raised)
++ wakeup_timer_softirqd();
+ local_irq_restore(flags);
+ #endif
+ }
+@@ -1173,17 +1223,30 @@ static struct notifier_block cpu_nfb = {
+ static struct smp_hotplug_thread softirq_threads = {
+ .store = &ksoftirqd,
+ .setup = ksoftirqd_set_sched_params,
+- .cleanup = ksoftirqd_clr_sched_params,
+ .thread_should_run = ksoftirqd_should_run,
+ .thread_fn = run_ksoftirqd,
+ .thread_comm = "ksoftirqd/%u",
+ };
+
++#ifdef CONFIG_PREEMPT_RT_FULL
++static struct smp_hotplug_thread softirq_timer_threads = {
++ .store = &ktimer_softirqd,
++ .setup = ktimer_softirqd_set_sched_params,
++ .cleanup = ktimer_softirqd_clr_sched_params,
++ .thread_should_run = ktimer_softirqd_should_run,
++ .thread_fn = run_ksoftirqd,
++ .thread_comm = "ktimersoftd/%u",
++};
++#endif
++
+ static __init int spawn_ksoftirqd(void)
+ {
+ register_cpu_notifier(&cpu_nfb);
+
+ BUG_ON(smpboot_register_percpu_thread(&softirq_threads));
++#ifdef CONFIG_PREEMPT_RT_FULL
++ BUG_ON(smpboot_register_percpu_thread(&softirq_timer_threads));
++#endif
+
+ return 0;
+ }
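
The routing decision added throughout this patch is always the same mask test: the TIMER and HRTIMER bits go to ktimersoftd, everything else goes to ksoftirqd. A compact sketch of just that dispatch; the softirq numbers below are illustrative values for the example, not taken from the kernel headers.

    #include <stdio.h>

    /* illustrative softirq numbers for the sketch */
    enum { HI_SOFTIRQ, TIMER_SOFTIRQ, NET_TX_SOFTIRQ, NET_RX_SOFTIRQ,
           HRTIMER_SOFTIRQ };

    #define TIMER_SOFTIRQS ((1u << TIMER_SOFTIRQ) | (1u << HRTIMER_SOFTIRQ))

    /* mirrors wakeup_proper_softirq(): timer softirqs wake the SCHED_FIFO
     * ktimersoftd thread, all others wake the SCHED_OTHER ksoftirqd thread */
    static const char *proper_thread(unsigned int nr)
    {
            return ((1u << nr) & TIMER_SOFTIRQS) ? "ktimersoftd" : "ksoftirqd";
    }

    int main(void)
    {
            printf("TIMER   -> %s\n", proper_thread(TIMER_SOFTIRQ));
            printf("HRTIMER -> %s\n", proper_thread(HRTIMER_SOFTIRQ));
            printf("NET_RX  -> %s\n", proper_thread(NET_RX_SOFTIRQ));
            return 0;
    }
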
diff --git a/patches/sparc64-use-generic-rwsem-spinlocks-rt.patch b/patches/sparc64-use-generic-rwsem-spinlocks-rt.patch
index cb4a7da65c132b..2625c5f76536a5 100644
--- a/patches/sparc64-use-generic-rwsem-spinlocks-rt.patch
+++ b/patches/sparc64-use-generic-rwsem-spinlocks-rt.patch
@@ -1,7 +1,6 @@
-From d6a6675d436897cd1b09e299436df3499abd753e Mon Sep 17 00:00:00 2001
From: Allen Pais <allen.pais@oracle.com>
Date: Fri, 13 Dec 2013 09:44:41 +0530
-Subject: [PATCH 1/3] sparc64: use generic rwsem spinlocks rt
+Subject: sparc64: use generic rwsem spinlocks rt
Signed-off-by: Allen Pais <allen.pais@oracle.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
diff --git a/patches/tasklet-rt-prevent-tasklets-from-going-into-infinite-spin-in-rt.patch b/patches/tasklet-rt-prevent-tasklets-from-going-into-infinite-spin-in-rt.patch
index e6835955905725..4ab5c45e78b6c1 100644
--- a/patches/tasklet-rt-prevent-tasklets-from-going-into-infinite-spin-in-rt.patch
+++ b/patches/tasklet-rt-prevent-tasklets-from-going-into-infinite-spin-in-rt.patch
@@ -43,7 +43,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
-@@ -482,8 +482,9 @@ static inline struct task_struct *this_c
+@@ -483,8 +483,9 @@ static inline struct task_struct *this_c
to be executed on some cpu at least once after this.
* If the tasklet is already scheduled, but its execution is still not
started, it will be executed only once.
@@ -55,7 +55,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* Tasklet is strictly serialized wrt itself, but not
wrt another tasklets. If client needs some intertask synchronization,
he makes it with spinlocks.
-@@ -508,27 +509,36 @@ struct tasklet_struct name = { NULL, 0,
+@@ -509,27 +510,36 @@ struct tasklet_struct name = { NULL, 0,
enum
{
TASKLET_STATE_SCHED, /* Tasklet is scheduled for execution */
@@ -98,7 +98,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
#define tasklet_unlock_wait(t) do { } while (0)
#define tasklet_unlock(t) do { } while (0)
#endif
-@@ -577,12 +587,7 @@ static inline void tasklet_disable(struc
+@@ -578,12 +588,7 @@ static inline void tasklet_disable(struc
smp_mb();
}
diff --git a/patches/timers-preempt-rt-support.patch b/patches/timers-preempt-rt-support.patch
index 2018ed3b3b1bfe..264c375d35617d 100644
--- a/patches/timers-preempt-rt-support.patch
+++ b/patches/timers-preempt-rt-support.patch
@@ -29,7 +29,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+ * the base lock to check when the next timer is pending and so
+ * we assume the next jiffy.
+ */
-+ return basej;
++ return basem + TICK_NSEC;
+#endif
spin_lock(&base->lock);
if (base->active_timers) {
diff --git a/patches/timers-prepare-for-full-preemption.patch b/patches/timers-prepare-for-full-preemption.patch
index 3ba59d4e61e8de..866be4023ae1f1 100644
--- a/patches/timers-prepare-for-full-preemption.patch
+++ b/patches/timers-prepare-for-full-preemption.patch
@@ -12,8 +12,8 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
include/linux/timer.h | 2 +-
kernel/sched/core.c | 9 +++++++--
- kernel/time/timer.c | 39 +++++++++++++++++++++++++++++++++++++--
- 3 files changed, 45 insertions(+), 5 deletions(-)
+ kernel/time/timer.c | 41 ++++++++++++++++++++++++++++++++++++++---
+ 3 files changed, 46 insertions(+), 6 deletions(-)
--- a/include/linux/timer.h
+++ b/include/linux/timer.h
@@ -100,6 +100,15 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/**
* del_timer - deactive a timer.
* @timer: the timer to be deactivated
+@@ -1063,7 +1093,7 @@ int try_to_del_timer_sync(struct timer_l
+ }
+ EXPORT_SYMBOL(try_to_del_timer_sync);
+
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT_FULL)
+ /**
+ * del_timer_sync - deactivate a timer and wait for the handler to finish.
+ * @timer: the timer to be deactivated
@@ -1123,7 +1153,7 @@ int del_timer_sync(struct timer_list *ti
int ret = try_to_del_timer_sync(timer);
if (ret >= 0)
diff --git a/patches/trace-latency-hist-Consider-new-argument-when-probin.patch b/patches/trace-latency-hist-Consider-new-argument-when-probin.patch
new file mode 100644
index 00000000000000..1b2550ae79d566
--- /dev/null
+++ b/patches/trace-latency-hist-Consider-new-argument-when-probin.patch
@@ -0,0 +1,37 @@
+From: Carsten Emde <C.Emde@osadl.org>
+Date: Tue, 5 Jan 2016 10:21:59 +0100
+Subject: trace/latency-hist: Consider new argument when probing the
+ sched_switch tracer
+
+The sched_switch tracepoint has gained a new argument. Fix the latency
+histogram probes accordingly.
+
+The argument was added by c73464b1c843 ("sched/core: Fix trace_sched_switch()"),
+which is upstream since v4.4-rc1.
+
+Signed-off-by: Carsten Emde <C.Emde@osadl.org>
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ kernel/trace/latency_hist.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/kernel/trace/latency_hist.c
++++ b/kernel/trace/latency_hist.c
+@@ -117,7 +117,7 @@ static char *wakeup_latency_hist_dir_sha
+ static notrace void probe_wakeup_latency_hist_start(void *v,
+ struct task_struct *p);
+ static notrace void probe_wakeup_latency_hist_stop(void *v,
+- struct task_struct *prev, struct task_struct *next);
++ bool preempt, struct task_struct *prev, struct task_struct *next);
+ static notrace void probe_sched_migrate_task(void *,
+ struct task_struct *task, int cpu);
+ static struct enable_data wakeup_latency_enabled_data = {
+@@ -907,7 +907,7 @@ static notrace void probe_wakeup_latency
+ }
+
+ static notrace void probe_wakeup_latency_hist_stop(void *v,
+- struct task_struct *prev, struct task_struct *next)
++ bool preempt, struct task_struct *prev, struct task_struct *next)
+ {
+ unsigned long flags;
+ int cpu = task_cpu(next);
diff --git a/patches/upstream-net-rt-remove-preemption-disabling-in-netif_rx.patch b/patches/upstream-net-rt-remove-preemption-disabling-in-netif_rx.patch
index f26739a1185c86..71239503710d6e 100644
--- a/patches/upstream-net-rt-remove-preemption-disabling-in-netif_rx.patch
+++ b/patches/upstream-net-rt-remove-preemption-disabling-in-netif_rx.patch
@@ -37,7 +37,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/net/core/dev.c
+++ b/net/core/dev.c
-@@ -3540,7 +3540,7 @@ static int netif_rx_internal(struct sk_b
+@@ -3575,7 +3575,7 @@ static int netif_rx_internal(struct sk_b
struct rps_dev_flow voidflow, *rflow = &voidflow;
int cpu;
@@ -46,7 +46,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
rcu_read_lock();
cpu = get_rps_cpu(skb->dev, skb, &rflow);
-@@ -3550,13 +3550,13 @@ static int netif_rx_internal(struct sk_b
+@@ -3585,13 +3585,13 @@ static int netif_rx_internal(struct sk_b
ret = enqueue_to_backlog(skb, cpu, &rflow->last_qtail);
rcu_read_unlock();
diff --git a/patches/workqueue-distangle-from-rq-lock.patch b/patches/workqueue-distangle-from-rq-lock.patch
index 48d83bf5895576..68a5d2af599963 100644
--- a/patches/workqueue-distangle-from-rq-lock.patch
+++ b/patches/workqueue-distangle-from-rq-lock.patch
@@ -31,7 +31,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
-@@ -1744,10 +1744,6 @@ static inline void ttwu_activate(struct
+@@ -1735,10 +1735,6 @@ static inline void ttwu_activate(struct
{
activate_task(rq, p, en_flags);
p->on_rq = TASK_ON_RQ_QUEUED;
@@ -42,7 +42,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
}
/*
-@@ -2064,52 +2060,6 @@ try_to_wake_up(struct task_struct *p, un
+@@ -2055,52 +2051,6 @@ try_to_wake_up(struct task_struct *p, un
}
/**
@@ -95,7 +95,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* wake_up_process - Wake up a specific process
* @p: The process to be woken up.
*
-@@ -3343,21 +3293,6 @@ static void __sched notrace __schedule(b
+@@ -3282,21 +3232,6 @@ static void __sched notrace __schedule(b
} else {
deactivate_task(rq, prev, DEQUEUE_SLEEP);
prev->on_rq = 0;
@@ -117,7 +117,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
}
switch_count = &prev->nvcsw;
}
-@@ -3390,6 +3325,14 @@ static inline void sched_submit_work(str
+@@ -3329,6 +3264,14 @@ static inline void sched_submit_work(str
{
if (!tsk->state || tsk_is_pi_blocked(tsk))
return;
@@ -132,7 +132,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/*
* If we are going to sleep and we have plugged IO queued,
* make sure to submit it to avoid deadlocks.
-@@ -3398,6 +3341,12 @@ static inline void sched_submit_work(str
+@@ -3337,6 +3280,12 @@ static inline void sched_submit_work(str
blk_schedule_flush_plug(tsk);
}
@@ -145,7 +145,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
asmlinkage __visible void __sched schedule(void)
{
struct task_struct *tsk = current;
-@@ -3408,6 +3357,7 @@ asmlinkage __visible void __sched schedu
+@@ -3347,6 +3296,7 @@ asmlinkage __visible void __sched schedu
__schedule(false);
sched_preempt_enable_no_resched();
} while (need_resched());
diff --git a/patches/workqueue-prevent-deadlock-stall.patch b/patches/workqueue-prevent-deadlock-stall.patch
index e52b942baea5c6..590931ab7386f0 100644
--- a/patches/workqueue-prevent-deadlock-stall.patch
+++ b/patches/workqueue-prevent-deadlock-stall.patch
@@ -43,7 +43,7 @@ Cc: Steven Rostedt <rostedt@goodmis.org>
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
-@@ -3439,9 +3439,8 @@ static void __sched notrace __schedule(b
+@@ -3378,9 +3378,8 @@ static void __sched notrace __schedule(b
static inline void sched_submit_work(struct task_struct *tsk)
{
@@ -54,7 +54,7 @@ Cc: Steven Rostedt <rostedt@goodmis.org>
/*
* If a worker went to sleep, notify and ask workqueue whether
* it wants to wake up a task to maintain concurrency.
-@@ -3449,6 +3448,10 @@ static inline void sched_submit_work(str
+@@ -3388,6 +3387,10 @@ static inline void sched_submit_work(str
if (tsk->flags & PF_WQ_WORKER)
wq_worker_sleeping(tsk);
diff --git a/patches/x86-preempt-lazy.patch b/patches/x86-preempt-lazy.patch
index ff3829dc9d966a..6f860841f25635 100644
--- a/patches/x86-preempt-lazy.patch
+++ b/patches/x86-preempt-lazy.patch
@@ -7,12 +7,12 @@ Implement the x86 pieces for lazy preempt.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
arch/x86/Kconfig | 1 +
- arch/x86/entry/common.c | 2 +-
+ arch/x86/entry/common.c | 4 ++--
arch/x86/entry/entry_32.S | 16 ++++++++++++++++
arch/x86/entry/entry_64.S | 16 ++++++++++++++++
arch/x86/include/asm/thread_info.h | 6 ++++++
arch/x86/kernel/asm-offsets.c | 2 ++
- 6 files changed, 42 insertions(+), 1 deletion(-)
+ 6 files changed, 43 insertions(+), 2 deletions(-)
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -26,6 +26,15 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
select ANON_INODES
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
+@@ -220,7 +220,7 @@ long syscall_trace_enter(struct pt_regs
+
+ #define EXIT_TO_USERMODE_LOOP_FLAGS \
+ (_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | _TIF_UPROBE | \
+- _TIF_NEED_RESCHED | _TIF_USER_RETURN_NOTIFY)
++ _TIF_NEED_RESCHED_MASK | _TIF_USER_RETURN_NOTIFY)
+
+ static void exit_to_usermode_loop(struct pt_regs *regs, u32 cached_flags)
+ {
@@ -236,7 +236,7 @@ static void exit_to_usermode_loop(struct
/* We have work to do. */
local_irq_enable();