author     Chengming Zhou <zhouchengming@bytedance.com>  2023-11-02 03:23:30 +0000
committer  Vlastimil Babka <vbabka@suse.cz>  2023-12-05 10:38:27 +0100
commit     31bda717d7777b8b6cf542af2730651ad6bb4839 (patch)
tree       4dc5dece86d2ba5fbb4899ad0c9f4cdc5d8a24ed /mm/slub.c
parent     21316fdc799932ff43fa00a6d6a45b16dbd77844 (diff)
slub: Update frozen slabs documentation in the source
The current updated scheme (which this series implemented) is:

 - node partial slabs: PG_Workingset && !frozen
 - cpu partial slabs: !PG_Workingset && !frozen
 - cpu slabs: !PG_Workingset && frozen
 - full slabs: !PG_Workingset && !frozen

The most important change is that the "frozen" bit is no longer set for
cpu partial slabs; __slab_free() now grabs the node list_lock and then
checks !PG_Workingset to confirm the slab is not on a node partial list.

The "frozen" bit is still kept for cpu slabs for performance: if the
"frozen" bit is set in __slab_free(), there is no need to grab the node
list_lock to check whether PG_Workingset is set.

Update related documentation and comments in the source.

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Acked-by: Christoph Lameter (Ampere) <cl@linux.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
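As an illustration of the state table above, here is a minimal userspace C
sketch (not kernel code; struct slab_flags and classify() are hypothetical
names invented for this example) of how the two bits identify a slab's state
in a __slab_free()-style decision:

#include <stdbool.h>
#include <stdio.h>

/* Illustrative only: mirrors the PG_Workingset/frozen table above,
 * not the kernel's actual struct slab layout. */
struct slab_flags {
	bool pg_workingset;	/* set only while the slab is on a node partial list */
	bool frozen;		/* set only for the cpu slab being allocated from */
};

/* Only the node-partial case requires taking the node list_lock. */
static const char *classify(struct slab_flags f)
{
	if (f.frozen)
		return "cpu slab (skip node list_lock)";
	if (f.pg_workingset)
		return "node partial slab (take node list_lock to remove it)";
	return "cpu partial or full slab (not on any node list)";
}

int main(void)
{
	struct slab_flags cases[] = {
		{ .pg_workingset = true,  .frozen = false },	/* node partial */
		{ .pg_workingset = false, .frozen = false },	/* cpu partial or full */
		{ .pg_workingset = false, .frozen = true  },	/* cpu slab */
	};

	for (size_t i = 0; i < sizeof(cases) / sizeof(cases[0]); i++)
		printf("PG_Workingset=%d frozen=%d -> %s\n",
		       cases[i].pg_workingset, cases[i].frozen, classify(cases[i]));
	return 0;
}

Note that cpu partial slabs and full slabs share the same flag combination,
so the flags alone only distinguish them from slabs that need list handling.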
Diffstat (limited to 'mm/slub.c')
-rw-r--r--  mm/slub.c  22
1 file changed, 18 insertions, 4 deletions
diff --git a/mm/slub.c b/mm/slub.c
index fe5fcf074dfdc..4fc203a4fa03e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -76,13 +76,28 @@
*
* Frozen slabs
*
- * If a slab is frozen then it is exempt from list management. It is not
- * on any list except per cpu partial list. The processor that froze the
+ * If a slab is frozen then it is exempt from list management. It is
+ * the cpu slab which is actively allocated from by the processor that
+ * froze it and it is not on any list. The processor that froze the
* slab is the one who can perform list operations on the slab. Other
* processors may put objects onto the freelist but the processor that
* froze the slab is the only one that can retrieve the objects from the
* slab's freelist.
*
+ * CPU partial slabs
+ *
+ * Partially empty slabs are cached on the CPU partial list to speed
+ * up allocation. These slabs are not frozen, but they are also exempt
+ * from list management: the PG_workingset flag is cleared when a slab
+ * is moved off the node partial list. Please see __slab_free() for
+ * more details.
+ *
+ * To sum up, the current scheme is:
+ * - node partial slab: PG_Workingset && !frozen
+ * - cpu partial slab: !PG_Workingset && !frozen
+ * - cpu slab: !PG_Workingset && frozen
+ * - full slab: !PG_Workingset && !frozen
+ *
* list_lock
*
* The list_lock protects the partial and full list on each node and
@@ -2617,8 +2632,7 @@ static void put_partials_cpu(struct kmem_cache *s,
}
/*
- * Put a slab that was just frozen (in __slab_free|get_partial_node) into a
- * partial slab slot if available.
+ * Put a slab into a partial slab slot if available.
*
* If we did not find a slot then simply move all the partials to the
* per node partial list.
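The comment above describes caching a partial slab in a per-cpu slot and,
when no slot is left, moving all cached partials to the node partial list.
Below is a toy, single-threaded C sketch of that idea; the names
(toy_slab, put_partial, drain_cpu_partials) and the limit are made up for
illustration and are not the kernel's implementation:

#include <stdio.h>

#define CPU_PARTIAL_LIMIT 4	/* arbitrary limit for this toy model */

struct toy_slab {
	int id;
	struct toy_slab *next;
};

static struct toy_slab *cpu_partial;	/* per-cpu partial slabs (one cpu here) */
static struct toy_slab *node_partial;	/* node partial list */
static int cpu_partial_count;

/* Move all cached per-cpu partial slabs to the node partial list. */
static void drain_cpu_partials(void)
{
	while (cpu_partial) {
		struct toy_slab *s = cpu_partial;

		cpu_partial = s->next;
		s->next = node_partial;
		node_partial = s;
	}
	cpu_partial_count = 0;
}

/* Put a slab into a per-cpu partial slot; drain to the node list if full. */
static void put_partial(struct toy_slab *s)
{
	if (cpu_partial_count >= CPU_PARTIAL_LIMIT)
		drain_cpu_partials();

	s->next = cpu_partial;
	cpu_partial = s;
	cpu_partial_count++;
}

int main(void)
{
	static struct toy_slab slabs[6];

	for (int i = 0; i < 6; i++) {
		slabs[i].id = i;
		put_partial(&slabs[i]);
	}

	printf("cpu partial slabs cached: %d\n", cpu_partial_count);
	for (struct toy_slab *s = node_partial; s; s = s->next)
		printf("slab %d moved to node partial list\n", s->id);
	return 0;
}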