From:

To make spinlock/rwlock initialization consistent all over the kernel, this
patch converts explicit lock-initializers into spin_lock_init() and
rwlock_init() calls.

Currently, spinlocks and rwlocks are initialized in two different ways:

	lock = SPIN_LOCK_UNLOCKED
	spin_lock_init(&lock)

	rwlock = RW_LOCK_UNLOCKED
	rwlock_init(&rwlock)

This patch converts all explicit lock initializations to spin_lock_init() or
rwlock_init().  (Besides consistency, this also helps automatic lock
validators and debugging code.)

The conversion was done with a script; it was verified manually, and it was
reviewed, compiled and tested as far as possible on x86, ARM and PPC.

There is no runtime overhead or actual code change resulting from this patch,
because spin_lock_init() and rwlock_init() are macros and are thus equivalent
to the explicit initialization method.

This is the second batch of the unifying patches.

Signed-off-by: Thomas Gleixner
Acked-by: Ingo Molnar
Acked-by: "Luck, Tony"
Signed-off-by: Andrew Morton
---

 25-akpm/arch/ia64/kernel/unwind.c |    2 +-
 1 files changed, 1 insertion(+), 1 deletion(-)

diff -puN arch/ia64/kernel/unwind.c~lock-initializer-unifying-batch-2-ia64 arch/ia64/kernel/unwind.c
--- 25/arch/ia64/kernel/unwind.c~lock-initializer-unifying-batch-2-ia64	2004-11-17 20:47:10.284672624 -0800
+++ 25-akpm/arch/ia64/kernel/unwind.c	2004-11-17 20:47:10.289671864 -0800
@@ -2256,7 +2256,7 @@ unw_init (void)
 		if (i > 0)
 			unw.cache[i].lru_chain = (i - 1);
 		unw.cache[i].coll_chain = -1;
-		unw.cache[i].lock = RW_LOCK_UNLOCKED;
+		rwlock_init(&unw.cache[i].lock);
 	}
 	unw.lru_head = UNW_CACHE_SIZE - 1;
 	unw.lru_tail = 0;
_