authorSasha Levin <sashal@kernel.org>2024-04-22 18:34:55 -0400
committerSasha Levin <sashal@kernel.org>2024-04-22 18:34:55 -0400
commitd6fc906b7aa432d098330a5918a2eccaebd9cdf5 (patch)
treea72a20cb076179e6363b83a6e935ac7b97a0715b
parent3f30c88bf73cf54e5788338aebc1e2678504e98a (diff)
downloadstable-queue-d6fc906b7aa432d098330a5918a2eccaebd9cdf5.tar.gz
Fixes for 5.15
Signed-off-by: Sasha Levin <sashal@kernel.org>
-rw-r--r--  queue-5.15/clk-get-runtime-pm-before-walking-tree-during-disabl.patch  344
-rw-r--r--  queue-5.15/clk-initialize-struct-clk_core-kref-earlier.patch  98
-rw-r--r--  queue-5.15/clk-mark-all_lists-as-const.patch  45
-rw-r--r--  queue-5.15/clk-print-an-info-line-before-disabling-unused-clock.patch  42
-rw-r--r--  queue-5.15/clk-remove-extra-empty-line.patch  35
-rw-r--r--  queue-5.15/clk-remove-prepare_lock-hold-assertion-in-__clk_rele.patch  44
-rw-r--r--  queue-5.15/series  8
-rw-r--r--  queue-5.15/x86-bugs-fix-bhi-retpoline-check.patch  63
-rw-r--r--  queue-5.15/x86-cpufeatures-fix-dependencies-for-gfni-vaes-and-v.patch  59
9 files changed, 738 insertions, 0 deletions
diff --git a/queue-5.15/clk-get-runtime-pm-before-walking-tree-during-disabl.patch b/queue-5.15/clk-get-runtime-pm-before-walking-tree-during-disabl.patch
new file mode 100644
index 0000000000..7c535c9068
--- /dev/null
+++ b/queue-5.15/clk-get-runtime-pm-before-walking-tree-during-disabl.patch
@@ -0,0 +1,344 @@
+From af6f525156361871f5c21e70c714e141a1803bbd Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 25 Mar 2024 11:41:58 -0700
+Subject: clk: Get runtime PM before walking tree during disable_unused
+
+From: Stephen Boyd <sboyd@kernel.org>
+
+[ Upstream commit e581cf5d216289ef292d1a4036d53ce90e122469 ]
+
+Doug reported [1] the following hung task:
+
+ INFO: task swapper/0:1 blocked for more than 122 seconds.
+ Not tainted 5.15.149-21875-gf795ebc40eb8 #1
+ "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
+ task:swapper/0 state:D stack: 0 pid: 1 ppid: 0 flags:0x00000008
+ Call trace:
+ __switch_to+0xf4/0x1f4
+ __schedule+0x418/0xb80
+ schedule+0x5c/0x10c
+ rpm_resume+0xe0/0x52c
+ rpm_resume+0x178/0x52c
+ __pm_runtime_resume+0x58/0x98
+ clk_pm_runtime_get+0x30/0xb0
+ clk_disable_unused_subtree+0x58/0x208
+ clk_disable_unused_subtree+0x38/0x208
+ clk_disable_unused_subtree+0x38/0x208
+ clk_disable_unused_subtree+0x38/0x208
+ clk_disable_unused_subtree+0x38/0x208
+ clk_disable_unused+0x4c/0xe4
+ do_one_initcall+0xcc/0x2d8
+ do_initcall_level+0xa4/0x148
+ do_initcalls+0x5c/0x9c
+ do_basic_setup+0x24/0x30
+ kernel_init_freeable+0xec/0x164
+ kernel_init+0x28/0x120
+ ret_from_fork+0x10/0x20
+ INFO: task kworker/u16:0:9 blocked for more than 122 seconds.
+ Not tainted 5.15.149-21875-gf795ebc40eb8 #1
+ "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
+ task:kworker/u16:0 state:D stack: 0 pid: 9 ppid: 2 flags:0x00000008
+ Workqueue: events_unbound deferred_probe_work_func
+ Call trace:
+ __switch_to+0xf4/0x1f4
+ __schedule+0x418/0xb80
+ schedule+0x5c/0x10c
+ schedule_preempt_disabled+0x2c/0x48
+ __mutex_lock+0x238/0x488
+ __mutex_lock_slowpath+0x1c/0x28
+ mutex_lock+0x50/0x74
+ clk_prepare_lock+0x7c/0x9c
+ clk_core_prepare_lock+0x20/0x44
+ clk_prepare+0x24/0x30
+ clk_bulk_prepare+0x40/0xb0
+ mdss_runtime_resume+0x54/0x1c8
+ pm_generic_runtime_resume+0x30/0x44
+ __genpd_runtime_resume+0x68/0x7c
+ genpd_runtime_resume+0x108/0x1f4
+ __rpm_callback+0x84/0x144
+ rpm_callback+0x30/0x88
+ rpm_resume+0x1f4/0x52c
+ rpm_resume+0x178/0x52c
+ __pm_runtime_resume+0x58/0x98
+ __device_attach+0xe0/0x170
+ device_initial_probe+0x1c/0x28
+ bus_probe_device+0x3c/0x9c
+ device_add+0x644/0x814
+ mipi_dsi_device_register_full+0xe4/0x170
+ devm_mipi_dsi_device_register_full+0x28/0x70
+ ti_sn_bridge_probe+0x1dc/0x2c0
+ auxiliary_bus_probe+0x4c/0x94
+ really_probe+0xcc/0x2c8
+ __driver_probe_device+0xa8/0x130
+ driver_probe_device+0x48/0x110
+ __device_attach_driver+0xa4/0xcc
+ bus_for_each_drv+0x8c/0xd8
+ __device_attach+0xf8/0x170
+ device_initial_probe+0x1c/0x28
+ bus_probe_device+0x3c/0x9c
+ deferred_probe_work_func+0x9c/0xd8
+ process_one_work+0x148/0x518
+ worker_thread+0x138/0x350
+ kthread+0x138/0x1e0
+ ret_from_fork+0x10/0x20
+
+The first thread is walking the clk tree and calling
+clk_pm_runtime_get() to power on devices required to read the clk
+hardware via struct clk_ops::is_enabled(). This thread holds the clk
+prepare_lock, and is trying to runtime PM resume a device, when it finds
+that the device is in the process of resuming so the thread schedule()s
+away waiting for the device to finish resuming before continuing. The
+second thread is runtime PM resuming the same device, but the runtime
+resume callback is calling clk_prepare(), trying to grab the
+prepare_lock waiting on the first thread.
+
+This is a classic ABBA deadlock. To properly fix the deadlock, we must
+never runtime PM resume or suspend a device with the clk prepare_lock
+held. Actually doing that is near impossible today because the global
+prepare_lock would have to be dropped in the middle of the tree, the
+device runtime PM resumed/suspended, and then the prepare_lock grabbed
+again to ensure consistency of the clk tree topology. If anything
+changes with the clk tree in the meantime, we've lost and will need to
+start the operation all over again.
+
+Luckily, most of the time we're simply incrementing or decrementing the
+runtime PM count on an active device, so we don't have the chance to
+schedule away with the prepare_lock held. Let's fix this immediate
+problem that can be triggered more easily by simply booting on Qualcomm
+sc7180.
+
+Introduce a list of clk_core structures that have been registered, or
+are in the process of being registered, that require runtime PM to
+operate. Iterate this list and call clk_pm_runtime_get() on each of them
+without holding the prepare_lock during clk_disable_unused(). This way
+we can be certain that the runtime PM state of the devices will be
+active and resumed so we can't schedule away while walking the clk tree
+with the prepare_lock held. Similarly, call clk_pm_runtime_put() without
+the prepare_lock held to properly drop the runtime PM reference. We
+remove the calls to clk_pm_runtime_{get,put}() in this path because
+they're superfluous now that we know the devices are runtime resumed.
+
+Reported-by: Douglas Anderson <dianders@chromium.org>
+Closes: https://lore.kernel.org/all/20220922084322.RFC.2.I375b6b9e0a0a5348962f004beb3dafee6a12dfbb@changeid/ [1]
+Closes: https://issuetracker.google.com/328070191
+Cc: Marek Szyprowski <m.szyprowski@samsung.com>
+Cc: Ulf Hansson <ulf.hansson@linaro.org>
+Cc: Krzysztof Kozlowski <krzk@kernel.org>
+Fixes: 9a34b45397e5 ("clk: Add support for runtime PM")
+Signed-off-by: Stephen Boyd <sboyd@kernel.org>
+Link: https://lore.kernel.org/r/20240325184204.745706-5-sboyd@kernel.org
+Reviewed-by: Douglas Anderson <dianders@chromium.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/clk/clk.c | 117 +++++++++++++++++++++++++++++++++++++++++-----
+ 1 file changed, 105 insertions(+), 12 deletions(-)
+
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 5cbc42882dce4..a05b5bca64250 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -37,6 +37,10 @@ static HLIST_HEAD(clk_root_list);
+ static HLIST_HEAD(clk_orphan_list);
+ static LIST_HEAD(clk_notifier_list);
+
++/* List of registered clks that use runtime PM */
++static HLIST_HEAD(clk_rpm_list);
++static DEFINE_MUTEX(clk_rpm_list_lock);
++
+ static const struct hlist_head *all_lists[] = {
+ &clk_root_list,
+ &clk_orphan_list,
+@@ -59,6 +63,7 @@ struct clk_core {
+ struct clk_hw *hw;
+ struct module *owner;
+ struct device *dev;
++ struct hlist_node rpm_node;
+ struct device_node *of_node;
+ struct clk_core *parent;
+ struct clk_parent_map *parents;
+@@ -129,6 +134,89 @@ static void clk_pm_runtime_put(struct clk_core *core)
+ pm_runtime_put_sync(core->dev);
+ }
+
++/**
++ * clk_pm_runtime_get_all() - Runtime "get" all clk provider devices
++ *
++ * Call clk_pm_runtime_get() on all runtime PM enabled clks in the clk tree so
++ * that disabling unused clks avoids a deadlock where a device is runtime PM
++ * resuming/suspending and the runtime PM callback is trying to grab the
++ * prepare_lock for something like clk_prepare_enable() while
++ * clk_disable_unused_subtree() holds the prepare_lock and is trying to runtime
++ * PM resume/suspend the device as well.
++ *
++ * Context: Acquires the 'clk_rpm_list_lock' and returns with the lock held on
++ * success. Otherwise the lock is released on failure.
++ *
++ * Return: 0 on success, negative errno otherwise.
++ */
++static int clk_pm_runtime_get_all(void)
++{
++ int ret;
++ struct clk_core *core, *failed;
++
++ /*
++ * Grab the list lock to prevent any new clks from being registered
++ * or unregistered until clk_pm_runtime_put_all().
++ */
++ mutex_lock(&clk_rpm_list_lock);
++
++ /*
++ * Runtime PM "get" all the devices that are needed for the clks
++ * currently registered. Do this without holding the prepare_lock, to
++ * avoid the deadlock.
++ */
++ hlist_for_each_entry(core, &clk_rpm_list, rpm_node) {
++ ret = clk_pm_runtime_get(core);
++ if (ret) {
++ failed = core;
++ pr_err("clk: Failed to runtime PM get '%s' for clk '%s'\n",
++ dev_name(failed->dev), failed->name);
++ goto err;
++ }
++ }
++
++ return 0;
++
++err:
++ hlist_for_each_entry(core, &clk_rpm_list, rpm_node) {
++ if (core == failed)
++ break;
++
++ clk_pm_runtime_put(core);
++ }
++ mutex_unlock(&clk_rpm_list_lock);
++
++ return ret;
++}
++
++/**
++ * clk_pm_runtime_put_all() - Runtime "put" all clk provider devices
++ *
++ * Put the runtime PM references taken in clk_pm_runtime_get_all() and release
++ * the 'clk_rpm_list_lock'.
++ */
++static void clk_pm_runtime_put_all(void)
++{
++ struct clk_core *core;
++
++ hlist_for_each_entry(core, &clk_rpm_list, rpm_node)
++ clk_pm_runtime_put(core);
++ mutex_unlock(&clk_rpm_list_lock);
++}
++
++static void clk_pm_runtime_init(struct clk_core *core)
++{
++ struct device *dev = core->dev;
++
++ if (dev && pm_runtime_enabled(dev)) {
++ core->rpm_enabled = true;
++
++ mutex_lock(&clk_rpm_list_lock);
++ hlist_add_head(&core->rpm_node, &clk_rpm_list);
++ mutex_unlock(&clk_rpm_list_lock);
++ }
++}
++
+ /*** locking ***/
+ static void clk_prepare_lock(void)
+ {
+@@ -1252,9 +1340,6 @@ static void __init clk_unprepare_unused_subtree(struct clk_core *core)
+ if (core->flags & CLK_IGNORE_UNUSED)
+ return;
+
+- if (clk_pm_runtime_get(core))
+- return;
+-
+ if (clk_core_is_prepared(core)) {
+ trace_clk_unprepare(core);
+ if (core->ops->unprepare_unused)
+@@ -1263,8 +1348,6 @@ static void __init clk_unprepare_unused_subtree(struct clk_core *core)
+ core->ops->unprepare(core->hw);
+ trace_clk_unprepare_complete(core);
+ }
+-
+- clk_pm_runtime_put(core);
+ }
+
+ static void __init clk_disable_unused_subtree(struct clk_core *core)
+@@ -1280,9 +1363,6 @@ static void __init clk_disable_unused_subtree(struct clk_core *core)
+ if (core->flags & CLK_OPS_PARENT_ENABLE)
+ clk_core_prepare_enable(core->parent);
+
+- if (clk_pm_runtime_get(core))
+- goto unprepare_out;
+-
+ flags = clk_enable_lock();
+
+ if (core->enable_count)
+@@ -1307,8 +1387,6 @@ static void __init clk_disable_unused_subtree(struct clk_core *core)
+
+ unlock_out:
+ clk_enable_unlock(flags);
+- clk_pm_runtime_put(core);
+-unprepare_out:
+ if (core->flags & CLK_OPS_PARENT_ENABLE)
+ clk_core_disable_unprepare(core->parent);
+ }
+@@ -1324,6 +1402,7 @@ __setup("clk_ignore_unused", clk_ignore_unused_setup);
+ static int __init clk_disable_unused(void)
+ {
+ struct clk_core *core;
++ int ret;
+
+ if (clk_ignore_unused) {
+ pr_warn("clk: Not disabling unused clocks\n");
+@@ -1332,6 +1411,13 @@ static int __init clk_disable_unused(void)
+
+ pr_info("clk: Disabling unused clocks\n");
+
++ ret = clk_pm_runtime_get_all();
++ if (ret)
++ return ret;
++ /*
++ * Grab the prepare lock to keep the clk topology stable while iterating
++ * over clks.
++ */
+ clk_prepare_lock();
+
+ hlist_for_each_entry(core, &clk_root_list, child_node)
+@@ -1348,6 +1434,8 @@ static int __init clk_disable_unused(void)
+
+ clk_prepare_unlock();
+
++ clk_pm_runtime_put_all();
++
+ return 0;
+ }
+ late_initcall_sync(clk_disable_unused);
+@@ -3887,6 +3975,12 @@ static void __clk_release(struct kref *ref)
+ {
+ struct clk_core *core = container_of(ref, struct clk_core, ref);
+
++ if (core->rpm_enabled) {
++ mutex_lock(&clk_rpm_list_lock);
++ hlist_del(&core->rpm_node);
++ mutex_unlock(&clk_rpm_list_lock);
++ }
++
+ clk_core_free_parent_map(core);
+ kfree_const(core->name);
+ kfree(core);
+@@ -3926,9 +4020,8 @@ __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw)
+ }
+ core->ops = init->ops;
+
+- if (dev && pm_runtime_enabled(dev))
+- core->rpm_enabled = true;
+ core->dev = dev;
++ clk_pm_runtime_init(core);
+ core->of_node = np;
+ if (dev && dev->driver)
+ core->owner = dev->driver->owner;
+--
+2.43.0
+
diff --git a/queue-5.15/clk-initialize-struct-clk_core-kref-earlier.patch b/queue-5.15/clk-initialize-struct-clk_core-kref-earlier.patch
new file mode 100644
index 0000000000..4efbf121c2
--- /dev/null
+++ b/queue-5.15/clk-initialize-struct-clk_core-kref-earlier.patch
@@ -0,0 +1,98 @@
+From db14d8a4d9cb84b23d539fc1f6726eb2b7683f59 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 25 Mar 2024 11:41:57 -0700
+Subject: clk: Initialize struct clk_core kref earlier
+
+From: Stephen Boyd <sboyd@kernel.org>
+
+[ Upstream commit 9d05ae531c2cff20d5d527f04e28d28e04379929 ]
+
+Initialize this kref once we allocate memory for the struct clk_core so
+that we can reuse the release function to free any memory associated
+with the structure. This mostly consolidates code, but also clarifies
+that the kref lifetime exists once the container structure (struct
+clk_core) is allocated instead of leaving it in a half-baked state for
+most of __clk_core_init().
+
+Reviewed-by: Douglas Anderson <dianders@chromium.org>
+Signed-off-by: Stephen Boyd <sboyd@kernel.org>
+Link: https://lore.kernel.org/r/20240325184204.745706-4-sboyd@kernel.org
+Stable-dep-of: e581cf5d2162 ("clk: Get runtime PM before walking tree during disable_unused")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/clk/clk.c | 28 +++++++++++++---------------
+ 1 file changed, 13 insertions(+), 15 deletions(-)
+
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index de1235d7b659a..5cbc42882dce4 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -3654,8 +3654,6 @@ static int __clk_core_init(struct clk_core *core)
+ }
+
+ clk_core_reparent_orphans_nolock();
+-
+- kref_init(&core->ref);
+ out:
+ clk_pm_runtime_put(core);
+ unlock:
+@@ -3884,6 +3882,16 @@ static void clk_core_free_parent_map(struct clk_core *core)
+ kfree(core->parents);
+ }
+
++/* Free memory allocated for a struct clk_core */
++static void __clk_release(struct kref *ref)
++{
++ struct clk_core *core = container_of(ref, struct clk_core, ref);
++
++ clk_core_free_parent_map(core);
++ kfree_const(core->name);
++ kfree(core);
++}
++
+ static struct clk *
+ __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw)
+ {
+@@ -3904,6 +3912,8 @@ __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw)
+ goto fail_out;
+ }
+
++ kref_init(&core->ref);
++
+ core->name = kstrdup_const(init->name, GFP_KERNEL);
+ if (!core->name) {
+ ret = -ENOMEM;
+@@ -3958,12 +3968,10 @@ __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw)
+ hw->clk = NULL;
+
+ fail_create_clk:
+- clk_core_free_parent_map(core);
+ fail_parents:
+ fail_ops:
+- kfree_const(core->name);
+ fail_name:
+- kfree(core);
++ kref_put(&core->ref, __clk_release);
+ fail_out:
+ return ERR_PTR(ret);
+ }
+@@ -4043,16 +4051,6 @@ int of_clk_hw_register(struct device_node *node, struct clk_hw *hw)
+ }
+ EXPORT_SYMBOL_GPL(of_clk_hw_register);
+
+-/* Free memory allocated for a clock. */
+-static void __clk_release(struct kref *ref)
+-{
+- struct clk_core *core = container_of(ref, struct clk_core, ref);
+-
+- clk_core_free_parent_map(core);
+- kfree_const(core->name);
+- kfree(core);
+-}
+-
+ /*
+ * Empty clk_ops for unregistered clocks. These are used temporarily
+ * after clk_unregister() was called on a clock and until last clock
+--
+2.43.0
+
diff --git a/queue-5.15/clk-mark-all_lists-as-const.patch b/queue-5.15/clk-mark-all_lists-as-const.patch
new file mode 100644
index 0000000000..2879c7cf8d
--- /dev/null
+++ b/queue-5.15/clk-mark-all_lists-as-const.patch
@@ -0,0 +1,45 @@
+From a969fe7e86dca13b94b390debf538333150a5a45 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 17 Feb 2022 14:05:53 -0800
+Subject: clk: Mark 'all_lists' as const
+
+From: Stephen Boyd <sboyd@kernel.org>
+
+[ Upstream commit 75061a6ff49ba3482c6319ded0c26e6a526b0967 ]
+
+This list array doesn't change at runtime. Mark it const to move to RO
+memory.
+
+Signed-off-by: Stephen Boyd <sboyd@kernel.org>
+Link: https://lore.kernel.org/r/20220217220554.2711696-2-sboyd@kernel.org
+Stable-dep-of: e581cf5d2162 ("clk: Get runtime PM before walking tree during disable_unused")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/clk/clk.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index acbe917cbe775..0d93537d46c34 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -37,7 +37,7 @@ static HLIST_HEAD(clk_root_list);
+ static HLIST_HEAD(clk_orphan_list);
+ static LIST_HEAD(clk_notifier_list);
+
+-static struct hlist_head *all_lists[] = {
++static const struct hlist_head *all_lists[] = {
+ &clk_root_list,
+ &clk_orphan_list,
+ NULL,
+@@ -4104,7 +4104,7 @@ static void clk_core_evict_parent_cache_subtree(struct clk_core *root,
+ /* Remove this clk from all parent caches */
+ static void clk_core_evict_parent_cache(struct clk_core *core)
+ {
+- struct hlist_head **lists;
++ const struct hlist_head **lists;
+ struct clk_core *root;
+
+ lockdep_assert_held(&prepare_lock);
+--
+2.43.0
+
diff --git a/queue-5.15/clk-print-an-info-line-before-disabling-unused-clock.patch b/queue-5.15/clk-print-an-info-line-before-disabling-unused-clock.patch
new file mode 100644
index 0000000000..ccd7dc19f5
--- /dev/null
+++ b/queue-5.15/clk-print-an-info-line-before-disabling-unused-clock.patch
@@ -0,0 +1,42 @@
+From 0045e9f8e8fcf0a861090ce8f480085851dd22e3 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 7 Mar 2023 14:29:28 +0100
+Subject: clk: Print an info line before disabling unused clocks
+
+From: Konrad Dybcio <konrad.dybcio@linaro.org>
+
+[ Upstream commit 12ca59b91d04df32e41be5a52f0cabba912c11de ]
+
+Currently, the regulator framework informs us before calling into
+their unused cleanup paths, which eases at least some debugging. The
+same could be beneficial for clocks, so that random shutdowns shortly
+after most initcalls are done can be less of a guess.
+
+Add a pr_info before disabling unused clocks to do so.
+
+Reviewed-by: Marijn Suijten <marijn.suijten@somainline.org>
+Signed-off-by: Konrad Dybcio <konrad.dybcio@linaro.org>
+Link: https://lore.kernel.org/r/20230307132928.3887737-1-konrad.dybcio@linaro.org
+Signed-off-by: Stephen Boyd <sboyd@kernel.org>
+Stable-dep-of: e581cf5d2162 ("clk: Get runtime PM before walking tree during disable_unused")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/clk/clk.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 52877fb06e181..de1235d7b659a 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -1330,6 +1330,8 @@ static int __init clk_disable_unused(void)
+ return 0;
+ }
+
++ pr_info("clk: Disabling unused clocks\n");
++
+ clk_prepare_lock();
+
+ hlist_for_each_entry(core, &clk_root_list, child_node)
+--
+2.43.0
+
diff --git a/queue-5.15/clk-remove-extra-empty-line.patch b/queue-5.15/clk-remove-extra-empty-line.patch
new file mode 100644
index 0000000000..0e32459ede
--- /dev/null
+++ b/queue-5.15/clk-remove-extra-empty-line.patch
@@ -0,0 +1,35 @@
+From 5c4745d0b8633a5386b2804ecaca7f1a02fce616 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 30 Jun 2022 18:12:04 +0300
+Subject: clk: remove extra empty line
+
+From: Claudiu Beznea <claudiu.beznea@microchip.com>
+
+[ Upstream commit 79806d338829b2bf903480428d8ce5aab8e2d24b ]
+
+Remove extra empty line.
+
+Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com>
+Link: https://lore.kernel.org/r/20220630151205.3935560-1-claudiu.beznea@microchip.com
+Signed-off-by: Stephen Boyd <sboyd@kernel.org>
+Stable-dep-of: e581cf5d2162 ("clk: Get runtime PM before walking tree during disable_unused")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/clk/clk.c | 1 -
+ 1 file changed, 1 deletion(-)
+
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 0d93537d46c34..52877fb06e181 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -3653,7 +3653,6 @@ static int __clk_core_init(struct clk_core *core)
+
+ clk_core_reparent_orphans_nolock();
+
+-
+ kref_init(&core->ref);
+ out:
+ clk_pm_runtime_put(core);
+--
+2.43.0
+
diff --git a/queue-5.15/clk-remove-prepare_lock-hold-assertion-in-__clk_rele.patch b/queue-5.15/clk-remove-prepare_lock-hold-assertion-in-__clk_rele.patch
new file mode 100644
index 0000000000..500ebd524f
--- /dev/null
+++ b/queue-5.15/clk-remove-prepare_lock-hold-assertion-in-__clk_rele.patch
@@ -0,0 +1,44 @@
+From 7f548c4365194a6e5374b1b578fcd8dd50615c67 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 25 Mar 2024 11:41:55 -0700
+Subject: clk: Remove prepare_lock hold assertion in __clk_release()
+
+From: Stephen Boyd <sboyd@kernel.org>
+
+[ Upstream commit 8358a76cfb47c9a5af627a0c4e7168aa14fa25f6 ]
+
+Removing this assertion lets us move the kref_put() call outside the
+prepare_lock section. We don't need to hold the prepare_lock here to
+free memory and destroy the clk_core structure. We've already unlinked
+the clk from the clk tree and by the time the release function runs
+nothing holds a reference to the clk_core anymore so anything with the
+pointer can't access the memory that's being freed anyway. Way back in
+commit 496eadf821c2 ("clk: Use lockdep asserts to find missing hold of
+prepare_lock") we didn't need to have this assertion either.
+
+Fixes: 496eadf821c2 ("clk: Use lockdep asserts to find missing hold of prepare_lock")
+Cc: Krzysztof Kozlowski <krzk@kernel.org>
+Reviewed-by: Douglas Anderson <dianders@chromium.org>
+Signed-off-by: Stephen Boyd <sboyd@kernel.org>
+Link: https://lore.kernel.org/r/20240325184204.745706-2-sboyd@kernel.org
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/clk/clk.c | 2 --
+ 1 file changed, 2 deletions(-)
+
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 84397af4fb336..acbe917cbe775 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -4047,8 +4047,6 @@ static void __clk_release(struct kref *ref)
+ {
+ struct clk_core *core = container_of(ref, struct clk_core, ref);
+
+- lockdep_assert_held(&prepare_lock);
+-
+ clk_core_free_parent_map(core);
+ kfree_const(core->name);
+ kfree(core);
+--
+2.43.0
+
diff --git a/queue-5.15/series b/queue-5.15/series
index f2e28e91e0..ffb314361b 100644
--- a/queue-5.15/series
+++ b/queue-5.15/series
@@ -31,3 +31,11 @@ s390-qdio-handle-deferred-cc1.patch
s390-cio-fix-race-condition-during-online-processing.patch
drm-nv04-fix-out-of-bounds-access.patch
drm-panel-visionox-rm69299-don-t-unregister-dsi-devi.patch
+clk-remove-prepare_lock-hold-assertion-in-__clk_rele.patch
+clk-mark-all_lists-as-const.patch
+clk-remove-extra-empty-line.patch
+clk-print-an-info-line-before-disabling-unused-clock.patch
+clk-initialize-struct-clk_core-kref-earlier.patch
+clk-get-runtime-pm-before-walking-tree-during-disabl.patch
+x86-bugs-fix-bhi-retpoline-check.patch
+x86-cpufeatures-fix-dependencies-for-gfni-vaes-and-v.patch
diff --git a/queue-5.15/x86-bugs-fix-bhi-retpoline-check.patch b/queue-5.15/x86-bugs-fix-bhi-retpoline-check.patch
new file mode 100644
index 0000000000..40ecc8b3e8
--- /dev/null
+++ b/queue-5.15/x86-bugs-fix-bhi-retpoline-check.patch
@@ -0,0 +1,63 @@
+From 263f6744e99cd39b42e8a9bb36f2ff5ef6615b08 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Fri, 12 Apr 2024 11:10:33 -0700
+Subject: x86/bugs: Fix BHI retpoline check
+
+From: Josh Poimboeuf <jpoimboe@kernel.org>
+
+[ Upstream commit 69129794d94c544810e68b2b4eaa7e44063f9bf2 ]
+
+Confusingly, X86_FEATURE_RETPOLINE doesn't mean retpolines are enabled,
+as it also includes the original "AMD retpoline" which isn't a retpoline
+at all.
+
+Also replace cpu_feature_enabled() with boot_cpu_has() because this is
+before alternatives are patched and cpu_feature_enabled()'s fallback
+path is slower than plain old boot_cpu_has().
+
+Fixes: ec9404e40e8f ("x86/bhi: Add BHI mitigation knob")
+Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Reviewed-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
+Cc: Borislav Petkov <bp@alien8.de>
+Cc: Linus Torvalds <torvalds@linux-foundation.org>
+Link: https://lore.kernel.org/r/ad3807424a3953f0323c011a643405619f2a4927.1712944776.git.jpoimboe@kernel.org
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ arch/x86/kernel/cpu/bugs.c | 11 +++++++----
+ 1 file changed, 7 insertions(+), 4 deletions(-)
+
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index b30b32b288dd4..247545b57dff6 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -1629,7 +1629,8 @@ static void __init bhi_select_mitigation(void)
+ return;
+
+ /* Retpoline mitigates against BHI unless the CPU has RRSBA behavior */
+- if (cpu_feature_enabled(X86_FEATURE_RETPOLINE)) {
++ if (boot_cpu_has(X86_FEATURE_RETPOLINE) &&
++ !boot_cpu_has(X86_FEATURE_RETPOLINE_LFENCE)) {
+ spec_ctrl_disable_kernel_rrsba();
+ if (rrsba_disabled)
+ return;
+@@ -2783,11 +2784,13 @@ static const char *spectre_bhi_state(void)
+ {
+ if (!boot_cpu_has_bug(X86_BUG_BHI))
+ return "; BHI: Not affected";
+- else if (boot_cpu_has(X86_FEATURE_CLEAR_BHB_HW))
++ else if (boot_cpu_has(X86_FEATURE_CLEAR_BHB_HW))
+ return "; BHI: BHI_DIS_S";
+- else if (boot_cpu_has(X86_FEATURE_CLEAR_BHB_LOOP))
++ else if (boot_cpu_has(X86_FEATURE_CLEAR_BHB_LOOP))
+ return "; BHI: SW loop, KVM: SW loop";
+- else if (boot_cpu_has(X86_FEATURE_RETPOLINE) && rrsba_disabled)
++ else if (boot_cpu_has(X86_FEATURE_RETPOLINE) &&
++ !boot_cpu_has(X86_FEATURE_RETPOLINE_LFENCE) &&
++ rrsba_disabled)
+ return "; BHI: Retpoline";
+ else if (boot_cpu_has(X86_FEATURE_CLEAR_BHB_LOOP_ON_VMEXIT))
+ return "; BHI: Vulnerable, KVM: SW loop";
+--
+2.43.0
+
diff --git a/queue-5.15/x86-cpufeatures-fix-dependencies-for-gfni-vaes-and-v.patch b/queue-5.15/x86-cpufeatures-fix-dependencies-for-gfni-vaes-and-v.patch
new file mode 100644
index 0000000000..f58208be74
--- /dev/null
+++ b/queue-5.15/x86-cpufeatures-fix-dependencies-for-gfni-vaes-and-v.patch
@@ -0,0 +1,59 @@
+From 0c7614ab22357a0223ccbc5471ebf0f111807e02 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 16 Apr 2024 23:04:34 -0700
+Subject: x86/cpufeatures: Fix dependencies for GFNI, VAES, and VPCLMULQDQ
+
+From: Eric Biggers <ebiggers@google.com>
+
+[ Upstream commit 9543f6e26634537997b6e909c20911b7bf4876de ]
+
+Fix cpuid_deps[] to list the correct dependencies for GFNI, VAES, and
+VPCLMULQDQ. These features don't depend on AVX512, and there exist CPUs
+that support these features but not AVX512. GFNI actually doesn't even
+depend on AVX.
+
+This prevents GFNI from being unnecessarily disabled if AVX is disabled
+to mitigate the GDS vulnerability.
+
+This also prevents all three features from being unnecessarily disabled
+if AVX512VL (or its dependency AVX512F) were to be disabled, but it
+looks like there isn't any case where this happens anyway.
+
+Fixes: c128dbfa0f87 ("x86/cpufeatures: Enable new SSE/AVX/AVX512 CPU features")
+Signed-off-by: Eric Biggers <ebiggers@google.com>
+Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
+Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
+Link: https://lore.kernel.org/r/20240417060434.47101-1-ebiggers@kernel.org
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ arch/x86/kernel/cpu/cpuid-deps.c | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/arch/x86/kernel/cpu/cpuid-deps.c b/arch/x86/kernel/cpu/cpuid-deps.c
+index defda61f372df..2161676577f2b 100644
+--- a/arch/x86/kernel/cpu/cpuid-deps.c
++++ b/arch/x86/kernel/cpu/cpuid-deps.c
+@@ -44,7 +44,10 @@ static const struct cpuid_dep cpuid_deps[] = {
+ { X86_FEATURE_F16C, X86_FEATURE_XMM2, },
+ { X86_FEATURE_AES, X86_FEATURE_XMM2 },
+ { X86_FEATURE_SHA_NI, X86_FEATURE_XMM2 },
++ { X86_FEATURE_GFNI, X86_FEATURE_XMM2 },
+ { X86_FEATURE_FMA, X86_FEATURE_AVX },
++ { X86_FEATURE_VAES, X86_FEATURE_AVX },
++ { X86_FEATURE_VPCLMULQDQ, X86_FEATURE_AVX },
+ { X86_FEATURE_AVX2, X86_FEATURE_AVX, },
+ { X86_FEATURE_AVX512F, X86_FEATURE_AVX, },
+ { X86_FEATURE_AVX512IFMA, X86_FEATURE_AVX512F },
+@@ -56,9 +59,6 @@ static const struct cpuid_dep cpuid_deps[] = {
+ { X86_FEATURE_AVX512VL, X86_FEATURE_AVX512F },
+ { X86_FEATURE_AVX512VBMI, X86_FEATURE_AVX512F },
+ { X86_FEATURE_AVX512_VBMI2, X86_FEATURE_AVX512VL },
+- { X86_FEATURE_GFNI, X86_FEATURE_AVX512VL },
+- { X86_FEATURE_VAES, X86_FEATURE_AVX512VL },
+- { X86_FEATURE_VPCLMULQDQ, X86_FEATURE_AVX512VL },
+ { X86_FEATURE_AVX512_VNNI, X86_FEATURE_AVX512VL },
+ { X86_FEATURE_AVX512_BITALG, X86_FEATURE_AVX512VL },
+ { X86_FEATURE_AVX512_4VNNIW, X86_FEATURE_AVX512F },
+--
+2.43.0
+