From: Manfred Spraul

Brian spotted a stupid bug in the slab initialization: if multiple objects
fit into one cacheline, then the allocator ignores SLAB_HWCACHE_ALIGN and
squeezes the objects into the same cacheline.  The implementation contains
an off-by-one error and thus doesn't work correctly: for Athlon-optimized
kernels, the 32-byte slab uses 64 bytes of memory.


 mm/slab.c |    2 +-
 1 files changed, 1 insertion(+), 1 deletion(-)

diff -puN mm/slab.c~slab-off-by-one-fix mm/slab.c
--- 25/mm/slab.c~slab-off-by-one-fix	2003-03-23 14:14:33.000000000 -0800
+++ 25-akpm/mm/slab.c	2003-03-23 14:14:33.000000000 -0800
@@ -1035,7 +1035,7 @@ kmem_cache_create (const char *name, siz
 	if (flags & SLAB_HWCACHE_ALIGN) {
 		/* Need to adjust size so that objs are cache aligned. */
 		/* Small obj size, can get at least two per cache line. */
-		while (size < align/2)
+		while (size <= align/2)
 			align /= 2;
 		size = (size+align-1)&(~(align-1));
 	}
_
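
For reference, here is a minimal standalone sketch (not part of the patch;
the helper name and the hard-coded 64-byte Athlon cache line are assumed
purely for illustration) of the alignment loop above, showing why the old
comparison pads a 32-byte object out to a full cache line while the patched
comparison lets two objects share one line:

#include <stdio.h>
#include <stddef.h>

/* Models the SLAB_HWCACHE_ALIGN sizing logic from kmem_cache_create(). */
static size_t aligned_size(size_t size, size_t align, int fixed)
{
	/* Halve the alignment while at least two objects fit per line. */
	if (fixed) {
		while (size <= align/2)		/* patched comparison */
			align /= 2;
	} else {
		while (size < align/2)		/* buggy comparison */
			align /= 2;
	}
	/* Round the object size up to the chosen alignment. */
	return (size + align - 1) & ~(align - 1);
}

int main(void)
{
	size_t size = 32, line = 64;	/* 64 = Athlon L1 cache line */

	/* Buggy: 32 < 32 is false, align stays 64, object padded to 64. */
	printf("before: %zu bytes\n", aligned_size(size, line, 0));
	/* Fixed: 32 <= 32 halves align to 32, object stays at 32 bytes. */
	printf("after:  %zu bytes\n", aligned_size(size, line, 1));
	return 0;
}

Running this prints 64 bytes for the unpatched comparison and 32 bytes for
the patched one, matching the wasted-memory case described above.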