author:    Paul Gortmaker <paul.gortmaker@windriver.com>  2016-10-02 16:54:24 -0400
committer: Paul Gortmaker <paul.gortmaker@windriver.com>  2016-10-02 17:12:49 -0400
commit:    59c07bdfb403bc4bf6585ef8d8526a9d8c57ebf2 (patch)
tree:      da073e6ac1f7fa70eddf00b34cdb78e6440e010c
parent:    cfdec3d17b5c5d38a10507ecda59de337f0ae21c (diff)
download:  4.8-rt-patches-v4.8-rt.tar.gz

net: drop the always take qdisc busylock patch  (HEAD, v4.8-rt, master)
This used to be a bitfield with __QDISC___STATE_RUNNING, but after mainline
commit f9eb8aea2a1e ("net_sched: transform qdisc running bit into a seqcount")
it is no longer that simple.  If we persist in faking out the lock state with
this commit, we'll see the following from migrate_enable() when SCHED_DEBUG
is on:

  WARNING: CPU: 0 PID: 1464 at kernel/sched/core.c:3415 migrate_enable+0x12a/0x160
  CPU: 0 PID: 1464 Comm: dhclient Not tainted 4.8.0-rc8-00284-g247e358567b5 #40
  Hardware name: Dell Inc. OptiPlex 760 /0M858N, BIOS A16 08/06/2013
   0000000000000000 ffff880037083bc8 ffffffff81308435 0000000000000000
   0000000000000000 ffff880037083c08 ffffffff8105c68f 00000d5737084000
   ffff880072083500 0000000000000002 ffff880072167c00 ffffffff81ec9aa0
  Call Trace:
   [<ffffffff81308435>] dump_stack+0x4f/0x6a
   [<ffffffff8105c68f>] __warn+0xdf/0x100
   [<ffffffff8105c768>] warn_slowpath_null+0x18/0x20
   [<ffffffff8108161a>] migrate_enable+0x12a/0x160
   [<ffffffff81943bb2>] rt_spin_unlock+0x22/0x30
   [<ffffffff81782b74>] __dev_queue_xmit+0x384/0x580
   [<ffffffff81782d7b>] dev_queue_xmit+0xb/0x10
   [<ffffffff8188ded0>] packet_sendmsg+0xb30/0x1390
   [<ffffffff81943bb2>] ? rt_spin_unlock+0x22/0x30
   [<ffffffff8109c8f3>] ? __wake_up_sync_key+0x43/0x50
   [<ffffffff812ade4c>] ? sock_has_perm+0x4c/0x90
   [<ffffffff8176742d>] ? sock_def_readable+0x6d/0x70
   [<ffffffff817637e3>] sock_sendmsg+0x33/0x40
   [<ffffffff81763866>] sock_write_iter+0x76/0xd0
   [<ffffffff8119b19f>] __vfs_write+0xbf/0x120
   [<ffffffff8119c213>] vfs_write+0xb3/0x1b0
   [<ffffffff8104ed48>] ? __do_page_fault+0x1c8/0x570
   [<ffffffff8119d524>] SyS_write+0x44/0xa0
   [<ffffffff81943e1b>] entry_SYSCALL_64_fastpath+0x13/0x8f

Given the mainline commit above, and the one just before it that reduces the
scope of the root lock usage, commit edb09eb17ed8 ("net: sched: do not acquire
qdisc spinlock in qdisc/class stats dump"), and the resulting splat, we drop
this change for now.  If we redo it at a later date, we should address the
obviously false use of "unlikely" that it causes.

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
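On the "obviously false use of unlikely", here is a minimal userspace sketch of
the shape of the dropped hunk.  It is a toy model, not the real net/sched code:
struct toy_qdisc, its fields and the MODEL_PREEMPT_RT_FULL define are
illustrative assumptions.  It only shows how the "running" test went from a
bitfield to a seqcount, and why forcing contended = true makes the mainline
"if (unlikely(contended))" hint permanently wrong.  Build with something like
"gcc -Wall -DMODEL_PREEMPT_RT_FULL toy_busylock.c" to model the RT side.

/*
 * Toy userspace model (NOT kernel code) of the hunk being dropped.  The
 * struct, field names and the 0x1UL "running" bit below are illustrative
 * assumptions; the real definitions live in include/net/sch_generic.h.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_qdisc {
	unsigned long state;        /* old world: "running" was a bit in here  */
	unsigned int  running_seq;  /* new world: seqcount, odd while running  */
};

/* Pre-f9eb8aea2a1e: a plain bitfield test. */
static bool toy_running_bitfield(const struct toy_qdisc *q)
{
	return (q->state & 0x1UL) != 0;  /* stand-in for __QDISC___STATE_RUNNING */
}

/* Post-f9eb8aea2a1e: a seqcount; an odd sequence means the dequeue loop
 * is inside its write section, i.e. the qdisc is running. */
static bool toy_running_seqcount(const struct toy_qdisc *q)
{
	return (q->running_seq & 1) != 0;
}

int main(void)
{
	struct toy_qdisc q = { .state = 0, .running_seq = 2 };
	bool contended;

	printf("bitfield says running=%d, seqcount says running=%d\n",
	       toy_running_bitfield(&q), toy_running_seqcount(&q));

#ifdef MODEL_PREEMPT_RT_FULL
	/* Shape of the dropped hunk: pretend the qdisc is always contended
	 * so the busylock is always taken.  With this, the mainline
	 * "if (unlikely(contended))" branch hint is permanently wrong. */
	contended = true;
#else
	contended = toy_running_seqcount(&q);
#endif
	printf("contended=%d -> %s take busylock\n",
	       contended, contended ? "would" : "would not");
	return 0;
}

In the real code, qdisc_is_running() checks the odd/even state of the seqcount
that qdisc_run_begin()/qdisc_run_end() bump around the dequeue loop; the toy
seqcount test above mirrors only that check.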
-rw-r--r--  patches/net-dev-always-take-qdisc-s-busylock-in-__dev_xmit_s.patch   37
-rw-r--r--  patches/series                                                        3
2 files changed, 2 insertions, 38 deletions
diff --git a/patches/net-dev-always-take-qdisc-s-busylock-in-__dev_xmit_s.patch b/patches/net-dev-always-take-qdisc-s-busylock-in-__dev_xmit_s.patch
deleted file mode 100644
index 7587708ea2f9e1..00000000000000
--- a/patches/net-dev-always-take-qdisc-s-busylock-in-__dev_xmit_s.patch
+++ /dev/null
@@ -1,37 +0,0 @@
-From 6e3be85b547b6f2b5e1740acf874b95b6721d873 Mon Sep 17 00:00:00 2001
-From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Date: Wed, 30 Mar 2016 13:36:29 +0200
-Subject: [PATCH] net: dev: always take qdisc's busylock in __dev_xmit_skb()
-
-The root-lock is dropped before dev_hard_start_xmit() is invoked and after
-setting the __QDISC___STATE_RUNNING bit. If this task is now pushed away
-by a task with a higher priority then the task with the higher priority
-won't be able to submit packets to the NIC directly instead they will be
-enqueued into the Qdisc. The NIC will remain idle until the task(s) with
-higher priority leave the CPU and the task with lower priority gets back
-and finishes the job.
-
-If we take always the busylock we ensure that the RT task can boost the
-low-prio task and submit the packet.
-
-Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-
-diff --git a/net/core/dev.c b/net/core/dev.c
-index b1e7a30b0cdf..aff6965fe941 100644
---- a/net/core/dev.c
-+++ b/net/core/dev.c
-@@ -3084,7 +3084,11 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
- * This permits qdisc->running owner to get the lock more
- * often and dequeue packets faster.
- */
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ contended = true;
-+#else
- contended = qdisc_is_running(q);
-+#endif
- if (unlikely(contended))
- spin_lock(&q->busylock);
-
---
-2.5.0
-
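The description of the deleted patch above argues that always taking the
busylock lets a real-time sender boost the low-priority task that currently
owns the qdisc.  On PREEMPT_RT, spin_lock() on the busylock maps to an rt_mutex
with priority inheritance; a rough userspace analogue is a POSIX mutex created
with PTHREAD_PRIO_INHERIT, sketched below under that assumption.  The thread
bodies are placeholders, and a real demonstration would also give the sender a
SCHED_FIFO priority (which needs the appropriate privileges); this only shows
the locking structure.  Build with "gcc -Wall -pthread pi_busylock.c".

/* Userspace analogue of the behaviour the dropped patch relied on: a
 * priority-inheritance mutex, so a blocked high-priority waiter boosts
 * the current low-priority owner instead of leaving it preempted while
 * the NIC sits idle.  Illustration only, not the kernel implementation. */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t busylock;

static void *low_prio_owner(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&busylock);
	/* While a high-priority thread blocks on busylock, PI lifts this
	 * thread's priority so it can finish and release the lock. */
	usleep(1000);                  /* pretend to dequeue/transmit */
	pthread_mutex_unlock(&busylock);
	return NULL;
}

static void *high_prio_sender(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&busylock); /* blocks, boosting the owner */
	/* submit the packet */
	pthread_mutex_unlock(&busylock);
	return NULL;
}

int main(void)
{
	pthread_mutexattr_t attr;
	pthread_t owner, sender;

	pthread_mutexattr_init(&attr);
	pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
	pthread_mutex_init(&busylock, &attr);

	pthread_create(&owner, NULL, low_prio_owner, NULL);
	pthread_create(&sender, NULL, high_prio_sender, NULL);
	pthread_join(owner, NULL);
	pthread_join(sender, NULL);
	puts("done");
	return 0;
}

Without the forced contended = true, an RT sender that finds the qdisc running
never touches the busylock and so has nothing through which to lend its
priority to the owner; that is the trade-off the deleted patch was making.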
diff --git a/patches/series b/patches/series
index 7e04717346b35f..285bb41cd5450c 100644
--- a/patches/series
+++ b/patches/series
@@ -430,7 +430,8 @@ skbufhead-raw-lock.patch
net-core-cpuhotplug-drain-input_pkt_queue-lockless.patch
net-move-xmit_recursion-to-per-task-variable-on-RT.patch
net-provide-a-way-to-delegate-processing-a-softirq-t.patch
-net-dev-always-take-qdisc-s-busylock-in-__dev_xmit_s.patch
+# not valid after mainline commit f9eb8aea2a1e seqcount addition
+# net-dev-always-take-qdisc-s-busylock-in-__dev_xmit_s.patch
net-add-back-the-missing-serialization-in-ip_send_un.patch
net-add-a-lock-around-icmp_sk.patch