author    Mark Tomlinson <mark.tomlinson@alliedtelesis.co.nz>  2021-05-27 17:43:22 +0900
committer Nobuhiro Iwamatsu <nobuhiro1.iwamatsu@toshiba.co.jp> 2021-05-27 17:44:14 +0900
commit    b9f85d8d4796a53b4998b9e29746a781ca700f03
tree      0994dbbf15f8f2ae833b08404701448008398207
parent    d257d252648e3a1dd23f683d89f5f2bca1c016ca
netfilter: x_tables: Use correct memory barriers.
commit 175e476b8cdf2a4de7432583b49c871345e4f8a1 upstream.

When a new table value was assigned, it was followed by a write memory
barrier. This ensured that all writes before this point would complete
before any writes after this point. However, to determine whether the
rules are unused, the sequence counter is read. To ensure that all
writes have been done before these reads, a full memory barrier is
needed, not just a write memory barrier. The same argument applies when
incrementing the counter, before the rules are read.

Changing to using smp_mb() instead of smp_wmb() fixes the kernel panic
reported in cc00bcaa5899 (which is still present), while still
maintaining the same speed of replacing tables.

The smp_mb() barriers potentially slow the packet path, however testing
has shown no measurable change in performance on a 4-core MIPS64
platform.

Fixes: 7f5c6d4f665b ("netfilter: get rid of atomic ops in fast path")
Signed-off-by: Mark Tomlinson <mark.tomlinson@alliedtelesis.co.nz>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
[Ported to stable, affected barrier is added by d3d40f237480abf3268956daf18cdc56edd32834 in mainline]
Signed-off-by: Pavel Machek (CIP) <pavel@denx.de>
Signed-off-by: Nobuhiro Iwamatsu (CIP) <nobuhiro1.iwamatsu@toshiba.co.jp>
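As a rough illustration of the ordering argument above, here is a minimal
user-space sketch using C11 atomics. It is not part of the commit, and every
name in it (table_ptr, pkt_seq, NCPUS, packet_path_begin(), replace_table())
is an invented stand-in for table->private, the per-cpu xt_recseq counters,
and the corresponding kernel paths; atomic_thread_fence(memory_order_seq_cst)
plays the role of smp_mb().

/*
 * Editorial sketch, not part of the commit: a user-space analogue of
 * the ordering described in the commit message, using C11 atomics.
 * All names here are invented stand-ins for the kernel objects.
 */
#include <stdatomic.h>

#define NCPUS 4

static _Atomic(void *) table_ptr;       /* stands in for table->private    */
static atomic_uint pkt_seq[NCPUS];      /* stands in for per-cpu xt_recseq */

/*
 * Packet path: make the counter odd, then read the table pointer.
 * The increment must be visible before the pointer read, so a full
 * barrier is needed, mirroring the xt_write_recseq_begin() change.
 */
static void *packet_path_begin(unsigned int cpu)
{
	atomic_fetch_add_explicit(&pkt_seq[cpu], 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);      /* smp_mb() */
	return atomic_load_explicit(&table_ptr, memory_order_relaxed);
}

/* Packet path done: make the counter even again. */
static void packet_path_end(unsigned int cpu)
{
	atomic_fetch_add_explicit(&pkt_seq[cpu], 1, memory_order_release);
}

/*
 * Replace path: publish the new table, then read every CPU's counter
 * to wait out readers still on the old table.  The store must be
 * ordered before those loads; smp_wmb() only orders stores against
 * stores, hence the smp_mb() the patch adds after the assignment.
 */
static void replace_table(void *newinfo)
{
	atomic_store_explicit(&table_ptr, newinfo, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);      /* smp_mb() */

	for (unsigned int cpu = 0; cpu < NCPUS; cpu++) {
		unsigned int seq = atomic_load_explicit(&pkt_seq[cpu],
							memory_order_acquire);
		if (seq & 1)    /* odd: a reader is mid critical section */
			while (atomic_load_explicit(&pkt_seq[cpu],
						    memory_order_acquire) == seq)
				;   /* spin until that reader finishes */
	}
}

If the pointer store could sink below the counter reads (which a write-only
barrier does not prevent), the replace path could decide a CPU was finished
while that CPU was still running on, or about to pick up, the old table,
which is the use-after-free behind the panic referenced above. A full
barrier on both sides rules that interleaving out.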
 include/linux/netfilter/x_tables.h | 2 +-
 net/netfilter/x_tables.c           | 3 +++
 2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/include/linux/netfilter/x_tables.h b/include/linux/netfilter/x_tables.h
index 6923e4049de3af..304b60b4952627 100644
--- a/include/linux/netfilter/x_tables.h
+++ b/include/linux/netfilter/x_tables.h
@@ -327,7 +327,7 @@ static inline unsigned int xt_write_recseq_begin(void)
 	 * since addend is most likely 1
 	 */
 	__this_cpu_add(xt_recseq.sequence, addend);
-	smp_wmb();
+	smp_mb();
 	return addend;
 }
diff --git a/net/netfilter/x_tables.c b/net/netfilter/x_tables.c
index 7e261fab7ef8de..480ccd52a73fdb 100644
--- a/net/netfilter/x_tables.c
+++ b/net/netfilter/x_tables.c
@@ -1140,6 +1140,9 @@ xt_replace_table(struct xt_table *table,
 	smp_wmb();
 	table->private = newinfo;
 
+	/* make sure all cpus see new ->private value */
+	smp_mb();
+
 	/*
 	 * Even though table entries have now been swapped, other CPU's
 	 * may still be using the old entries. This is okay, because