.. SPDX-License-Identifier: GPL-2.0

.. _kernel_hacking_locktypes:

==========================
Lock types and their rules
==========================

Introduction
============

The kernel provides a variety of locking primitives which can be divided
into three categories:

 - Sleeping locks
 - CPU local locks
 - Spinning locks

This document conceptually describes these lock types and provides rules
for their nesting, including the rules for use under PREEMPT_RT.


Lock categories
===============

Sleeping locks
--------------

Sleeping locks can only be acquired in preemptible task context.

Although implementations allow try_lock() from other contexts, it is
necessary to carefully evaluate the safety of unlock() as well as of
try_lock().  Furthermore, it is also necessary to evaluate the debugging
versions of these primitives.  In short, don't acquire sleeping locks from
other contexts unless there is no other option.

Sleeping lock types:

 - mutex
 - rt_mutex
 - semaphore
 - rw_semaphore
 - ww_mutex
 - percpu_rw_semaphore

On PREEMPT_RT kernels, these lock types are converted to sleeping locks:

 - local_lock
 - spinlock_t
 - rwlock_t
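
As an illustration of the task-context requirement, here is a minimal
sketch using a plain mutex around a shared list.  The names struct item,
shared_list, shared_lock and add_item() are invented for this example
only::

  struct item {
          struct list_head node;
          /* ... payload ... */
  };

  static LIST_HEAD(shared_list);
  static DEFINE_MUTEX(shared_lock);

  static void add_item(struct item *it)
  {
          mutex_lock(&shared_lock);       /* may sleep: task context only */
          list_add_tail(&it->node, &shared_list);
          mutex_unlock(&shared_lock);     /* released by the acquiring task */
  }
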
CPU local locks
---------------

 - local_lock

On non-PREEMPT_RT kernels, local_lock functions are wrappers around
preemption and interrupt disabling primitives.  Contrary to other locking
mechanisms, disabling preemption or interrupts is a purely CPU-local
concurrency control mechanism and is not suited for inter-CPU concurrency
control.


Spinning locks
--------------

 - raw_spinlock_t
 - bit spinlocks

On non-PREEMPT_RT kernels, these lock types are also spinning locks:

 - spinlock_t
 - rwlock_t

Spinning locks implicitly disable preemption and the lock / unlock
functions can have suffixes which apply further protections (see the
example after the table):

 ===================  ====================================================
 _bh()                Disable / enable bottom halves (soft interrupts)
 _irq()               Disable / enable interrupts
 _irqsave/restore()   Save and disable / restore interrupt disabled state
 ===================  ====================================================
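
As a minimal sketch of the suffix variants, the following hypothetical
snippet protects a counter that is also touched from a hard interrupt
handler; counter_lock, counter and bump_counter() are names made up for
this illustration::

  static DEFINE_SPINLOCK(counter_lock);
  static unsigned long counter;

  static void bump_counter(void)
  {
          unsigned long flags;

          /* _irqsave also disables interrupts and records their prior state */
          spin_lock_irqsave(&counter_lock, flags);
          counter++;
          spin_unlock_irqrestore(&counter_lock, flags);
  }
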
Owner semantics
===============

The aforementioned lock types except semaphores have strict owner
semantics:

  The context (task) that acquired the lock must release it.

rw_semaphores have a special interface which allows non-owner release for
readers.


rtmutex
=======

RT-mutexes are mutexes with support for priority inheritance (PI).

PI has limitations on non-PREEMPT_RT kernels due to preemption and
interrupt disabled sections.

PI clearly cannot preempt preemption-disabled or interrupt-disabled
regions of code, even on PREEMPT_RT kernels.  Instead, PREEMPT_RT kernels
execute most such regions of code in preemptible task context, especially
interrupt handlers and soft interrupts.  This conversion allows spinlock_t
and rwlock_t to be implemented via RT-mutexes.


semaphore
=========

semaphore is a counting semaphore implementation.

Semaphores are often used for both serialization and waiting, but new use
cases should instead use separate serialization and wait mechanisms, such
as mutexes and completions.
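
As a sketch of the recommended split, the following hypothetical snippet
uses a mutex for serialization and a completion for waiting instead of a
single semaphore; setup_lock, setup_done and do_setup() are invented names
for this example::

  static DEFINE_MUTEX(setup_lock);              /* serialization */
  static DECLARE_COMPLETION(setup_done);        /* waiting */

  static void producer(void)
  {
          mutex_lock(&setup_lock);
          do_setup();                           /* hypothetical helper */
          mutex_unlock(&setup_lock);
          complete(&setup_done);                /* wake up the waiter */
  }

  static void consumer(void)
  {
          wait_for_completion(&setup_done);     /* sleep until setup finished */
  }
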
semaphores and PREEMPT_RT
-------------------------

PREEMPT_RT does not change the semaphore implementation because counting
semaphores have no concept of owners, thus preventing PREEMPT_RT from
providing priority inheritance for semaphores.  After all, an unknown
owner cannot be boosted.  As a consequence, blocking on semaphores can
result in priority inversion.


rw_semaphore
============

rw_semaphore is a multiple readers and single writer lock mechanism.

On non-PREEMPT_RT kernels the implementation is fair, thus preventing
writer starvation.

rw_semaphore complies by default with the strict owner semantics, but
there exist special-purpose interfaces that allow non-owner release for
readers.  These interfaces work independently of the kernel configuration.
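
A brief sketch of the usual reader/writer pattern; stats_sem, stats_value
and the helper names are assumptions made for this illustration (the
special-purpose non-owner reader release mentioned above is provided by
interfaces such as down_read_non_owner()/up_read_non_owner())::

  static DECLARE_RWSEM(stats_sem);
  static unsigned long stats_value;

  static unsigned long read_stats(void)
  {
          unsigned long v;

          down_read(&stats_sem);          /* many readers may hold this */
          v = stats_value;
          up_read(&stats_sem);
          return v;
  }

  static void update_stats(unsigned long v)
  {
          down_write(&stats_sem);         /* single writer */
          stats_value = v;
          up_write(&stats_sem);
  }
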
rw_semaphore and PREEMPT_RT
---------------------------

PREEMPT_RT kernels map rw_semaphore to a separate rt_mutex-based
implementation, thus changing the fairness:

 Because an rw_semaphore writer cannot grant its priority to multiple
 readers, a preempted low-priority reader will continue holding its lock,
 thus starving even high-priority writers.  In contrast, because readers
 can grant their priority to a writer, a preempted low-priority writer
 will have its priority boosted until it releases the lock, thus
 preventing that writer from starving readers.


local_lock
==========

local_lock provides a named scope to critical sections which are protected
by disabling preemption or interrupts.

On non-PREEMPT_RT kernels local_lock operations map to the preemption and
interrupt disabling and enabling primitives:

 ===============================  ======================
 local_lock(&llock)               preempt_disable()
 local_unlock(&llock)             preempt_enable()
 local_lock_irq(&llock)           local_irq_disable()
 local_unlock_irq(&llock)         local_irq_enable()
 local_lock_irqsave(&llock)       local_irq_save()
 local_unlock_irqrestore(&llock)  local_irq_restore()
 ===============================  ======================

The named scope of local_lock has two advantages over the regular
primitives:

  - The lock name allows static analysis and also clearly documents the
    protection scope, while the regular primitives are scopeless and
    opaque.

  - If lockdep is enabled, the local_lock gains a lockmap which allows
    validating the correctness of the protection.  This can detect cases
    where e.g. a function using preempt_disable() as protection mechanism
    is invoked from interrupt or soft-interrupt context.  Aside from that,
    lockdep_assert_held(&llock) works as with any other locking primitive.
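
The following sketch shows the typical pattern of a named per-CPU scope;
struct msg_pool, msg_pools, account_msg() and their fields are
hypothetical and only serve to illustrate the API::

  struct msg_pool {
          local_lock_t    lock;           /* names the protection scope */
          unsigned int    count;
  };

  static DEFINE_PER_CPU(struct msg_pool, msg_pools) = {
          .lock = INIT_LOCAL_LOCK(lock),
  };

  static void account_msg(void)
  {
          /* non-PREEMPT_RT: preempt_disable(); PREEMPT_RT: per-CPU spinlock_t */
          local_lock(&msg_pools.lock);
          this_cpu_inc(msg_pools.count);
          local_unlock(&msg_pools.lock);
  }
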
local_lock and PREEMPT_RT
-------------------------

PREEMPT_RT kernels map local_lock to a per-CPU spinlock_t, thus changing
semantics:

 - All spinlock_t changes also apply to local_lock.


local_lock usage
----------------

local_lock should be used in situations where disabling preemption or
interrupts is the appropriate form of concurrency control to protect
per-CPU data structures on a non-PREEMPT_RT kernel.

local_lock is not suitable to protect against preemption or interrupts on
a PREEMPT_RT kernel due to the PREEMPT_RT-specific spinlock_t semantics.


raw_spinlock_t and spinlock_t
=============================

raw_spinlock_t
--------------

raw_spinlock_t is a strict spinning lock implementation in all kernels,
including PREEMPT_RT kernels.  Use raw_spinlock_t only in real critical
core code, low-level interrupt handling and places where disabling
preemption or interrupts is required, for example, to safely access
hardware state.  raw_spinlock_t can sometimes also be used when the
critical section is tiny, thus avoiding RT-mutex overhead.
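
A short, hypothetical sketch of the hardware-access case; hw_lock, regs,
REG_CTRL and hw_update() are made-up names for illustration::

  static DEFINE_RAW_SPINLOCK(hw_lock);

  static void hw_update(void __iomem *regs, u32 val)
  {
          unsigned long flags;

          /* Truly atomic on all kernels, including PREEMPT_RT */
          raw_spin_lock_irqsave(&hw_lock, flags);
          writel(val, regs + REG_CTRL);   /* hypothetical register offset */
          raw_spin_unlock_irqrestore(&hw_lock, flags);
  }
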
spinlock_t
----------

The semantics of spinlock_t change with the state of PREEMPT_RT.

On a non-PREEMPT_RT kernel spinlock_t is mapped to raw_spinlock_t and has
exactly the same semantics.

spinlock_t and PREEMPT_RT
-------------------------

On a PREEMPT_RT kernel spinlock_t is mapped to a separate implementation
based on rt_mutex which changes the semantics:

 - Preemption is not disabled.

 - The hard interrupt related suffixes for spin_lock / spin_unlock
   operations (_irq, _irqsave / _irqrestore) do not affect the CPU's
   interrupt disabled state.

 - The soft interrupt related suffix (_bh()) still disables softirq
   handlers.

   Non-PREEMPT_RT kernels disable preemption to get this effect.

   PREEMPT_RT kernels use a per-CPU lock for serialization which keeps
   preemption enabled.  The lock disables softirq handlers and also
   prevents reentrancy due to task preemption.

PREEMPT_RT kernels preserve all other spinlock_t semantics:

 - Tasks holding a spinlock_t do not migrate.  Non-PREEMPT_RT kernels
   avoid migration by disabling preemption.  PREEMPT_RT kernels instead
   disable migration, which ensures that pointers to per-CPU variables
   remain valid even if the task is preempted.

 - Task state is preserved across spinlock acquisition, ensuring that the
   task-state rules apply to all kernel configurations.  Non-PREEMPT_RT
   kernels leave task state untouched.  However, PREEMPT_RT must change
   task state if the task blocks during acquisition.  Therefore, it saves
   the current task state before blocking and the corresponding lock
   wakeup restores it, as shown below::

     task->state = TASK_INTERRUPTIBLE
      lock()
        block()
          task->saved_state = task->state
          task->state = TASK_UNINTERRUPTIBLE
          schedule()
                                         lock wakeup
            task->state = task->saved_state

   Other types of wakeups would normally unconditionally set the task
   state to RUNNING, but that does not work here because the task must
   remain blocked until the lock becomes available.  Therefore, when a
   non-lock wakeup attempts to awaken a task blocked waiting for a
   spinlock, it instead sets the saved state to RUNNING.  Then, when the
   lock acquisition completes, the lock wakeup sets the task state to the
   saved state, in this case setting it to RUNNING::

     task->state = TASK_INTERRUPTIBLE
      lock()
        block()
          task->saved_state = task->state
          task->state = TASK_UNINTERRUPTIBLE
          schedule()
                                         non lock wakeup
                                             task->saved_state = TASK_RUNNING
                                         lock wakeup
            task->state = task->saved_state

   This ensures that the real wakeup cannot be lost.


rwlock_t
========

rwlock_t is a multiple readers and single writer lock mechanism.

Non-PREEMPT_RT kernels implement rwlock_t as a spinning lock and the
suffix rules of spinlock_t apply accordingly.  The implementation is fair,
thus preventing writer starvation.
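
A minimal, hypothetical sketch of rwlock_t usage; cfg_lock, cfg_val and
the helper names are invented for this example::

  static DEFINE_RWLOCK(cfg_lock);
  static int cfg_val;

  static int cfg_read(void)
  {
          int v;

          read_lock(&cfg_lock);           /* shared among readers */
          v = cfg_val;
          read_unlock(&cfg_lock);
          return v;
  }

  static void cfg_write(int v)
  {
          write_lock(&cfg_lock);          /* exclusive */
          cfg_val = v;
          write_unlock(&cfg_lock);
  }
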
rwlock_t and PREEMPT_RT
-----------------------

PREEMPT_RT kernels map rwlock_t to a separate rt_mutex-based
implementation, thus changing semantics:

 - All the spinlock_t changes also apply to rwlock_t.

 - Because an rwlock_t writer cannot grant its priority to multiple
   readers, a preempted low-priority reader will continue holding its
   lock, thus starving even high-priority writers.  In contrast, because
   readers can grant their priority to a writer, a preempted low-priority
   writer will have its priority boosted until it releases the lock, thus
   preventing that writer from starving readers.


PREEMPT_RT caveats
==================

local_lock on RT
----------------

The mapping of local_lock to spinlock_t on PREEMPT_RT kernels has a few
implications.  For example, on a non-PREEMPT_RT kernel the following code
sequence works as expected::

  local_lock_irq(&local_lock);
  raw_spin_lock(&lock);

and is fully equivalent to::

  raw_spin_lock_irq(&lock);

On a PREEMPT_RT kernel this code sequence breaks because local_lock_irq()
is mapped to a per-CPU spinlock_t which neither disables interrupts nor
preemption.  The following code sequence works correctly on both
PREEMPT_RT and non-PREEMPT_RT kernels::

  local_lock_irq(&local_lock);
  spin_lock(&lock);

Another caveat with local locks is that each local_lock has a specific
protection scope.  So the following substitution is wrong::

  func1()
  {
    local_irq_save(flags);    -> local_lock_irqsave(&local_lock_1, flags);
    func3();
    local_irq_restore(flags); -> local_unlock_irqrestore(&local_lock_1, flags);
  }

  func2()
  {
    local_irq_save(flags);    -> local_lock_irqsave(&local_lock_2, flags);
    func3();
    local_irq_restore(flags); -> local_unlock_irqrestore(&local_lock_2, flags);
  }

  func3()
  {
    lockdep_assert_irqs_disabled();
    access_protected_data();
  }

On a non-PREEMPT_RT kernel this works correctly, but on a PREEMPT_RT
kernel local_lock_1 and local_lock_2 are distinct and cannot serialize the
callers of func3().  Also the lockdep assert will trigger on a PREEMPT_RT
kernel because local_lock_irqsave() does not disable interrupts due to the
PREEMPT_RT-specific semantics of spinlock_t.  The correct substitution
is::

  func1()
  {
    local_irq_save(flags);    -> local_lock_irqsave(&local_lock, flags);
    func3();
    local_irq_restore(flags); -> local_unlock_irqrestore(&local_lock, flags);
  }

  func2()
  {
    local_irq_save(flags);    -> local_lock_irqsave(&local_lock, flags);
    func3();
    local_irq_restore(flags); -> local_unlock_irqrestore(&local_lock, flags);
  }

  func3()
  {
    lockdep_assert_held(&local_lock);
    access_protected_data();
  }


spinlock_t and rwlock_t
-----------------------

The changes in spinlock_t and rwlock_t semantics on PREEMPT_RT kernels
have a few implications.  For example, on a non-PREEMPT_RT kernel the
following code sequence works as expected::

  local_irq_disable();
  spin_lock(&lock);

and is fully equivalent to::

  spin_lock_irq(&lock);

On a PREEMPT_RT kernel this code sequence breaks because spinlock_t
disables neither interrupts nor preemption; use spin_lock_irq() or
spin_lock_irqsave() and their unlock counterparts instead.  A typical
scenario is protection of per-CPU variables in task context::

  struct foo *p = get_cpu_ptr(&var1);

  spin_lock(&p->lock);
  p->count += this_cpu_read(var2);

This is correct code on a non-PREEMPT_RT kernel, but on a PREEMPT_RT
kernel this breaks.  The PREEMPT_RT-specific change of spinlock_t
semantics does not allow acquiring p->lock because get_cpu_ptr()
implicitly disables preemption.  The following substitution works on both
kernels::

  struct foo *p;

  migrate_disable();
  p = this_cpu_ptr(&var1);
  spin_lock(&p->lock);
  p->count += this_cpu_read(var2);

migrate_disable() ensures that the task is pinned on the current CPU,
which in turn guarantees that the per-CPU accesses to var1 and var2 stay
on the same CPU while the task remains preemptible.

The migrate_disable() substitution is not valid for the following
scenario::

  func()
  {
    struct foo *p;

    migrate_disable();
    p = this_cpu_ptr(&var1);
    p->val = func2();

This breaks because migrate_disable() does not protect against reentrancy
from a preempting task.  A correct substitution for this case is::

  func()
  {
    struct foo *p;

    local_lock(&foo_lock);
    p = this_cpu_ptr(&var1);
    p->val = func2();

On a non-PREEMPT_RT kernel this protects against reentrancy by disabling
preemption.  On a PREEMPT_RT kernel this is achieved by acquiring the
underlying per-CPU spinlock.


raw_spinlock_t on RT
--------------------

Acquiring a raw_spinlock_t disables preemption and possibly also
interrupts, so the critical section must avoid acquiring a regular
spinlock_t or rwlock_t, for example, the critical section must avoid
allocating memory.  Thus, on a non-PREEMPT_RT kernel the following code
works perfectly::

  raw_spin_lock(&lock);
  p = kmalloc(sizeof(*p), GFP_ATOMIC);

But this code fails on PREEMPT_RT kernels because the memory allocator is
fully preemptible and therefore cannot be invoked from truly atomic
contexts.  However, it is perfectly fine to invoke the memory allocator
while holding normal non-raw spinlocks because they do not disable
preemption on PREEMPT_RT kernels::

  spin_lock(&lock);
  p = kmalloc(sizeof(*p), GFP_ATOMIC);


bit spinlocks
-------------

PREEMPT_RT cannot substitute bit spinlocks because a single bit is too
small to accommodate an RT-mutex.  Therefore, the semantics of bit
spinlocks are preserved on PREEMPT_RT kernels, so that the raw_spinlock_t
caveats also apply to bit spinlocks.

Some bit spinlocks are replaced with regular spinlock_t for PREEMPT_RT
using conditional (#ifdef'ed) code changes at the usage site.  In
contrast, usage-site changes are not needed for the spinlock_t
substitution.  Instead, conditionals in header files and the core locking
implementation enable the compiler to do the substitution transparently.
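
For reference, a bit spinlock takes a bit number and a word, as in this
hypothetical sketch; state_word, LOCK_BIT and update_state() are invented
names and the interface is provided by <linux/bit_spinlock.h>::

  #define LOCK_BIT        0               /* hypothetical bit used as the lock */
  static unsigned long state_word;

  static void update_state(void)
  {
          bit_spin_lock(LOCK_BIT, &state_word);   /* spins, like raw_spinlock_t */
          /* ... modify the data protected by the bit lock ... */
          bit_spin_unlock(LOCK_BIT, &state_word);
  }
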
Lock type nesting rules
=======================

The most basic rules are:

 - Lock types of the same lock category (sleeping, CPU local, spinning)
   can nest arbitrarily as long as they respect the general lock ordering
   rules to prevent deadlocks.

 - Sleeping lock types cannot nest inside CPU local and spinning lock
   types.

 - CPU local and spinning lock types can nest inside sleeping lock types.

 - Spinning lock types can nest inside all lock types.

These constraints apply both in PREEMPT_RT and otherwise.

The fact that PREEMPT_RT changes the lock category of spinlock_t and
rwlock_t from spinning to sleeping and substitutes local_lock with a
per-CPU spinlock_t means that they cannot be acquired while holding a raw
spinlock.  This results in the following nesting ordering:

  1) Sleeping locks
  2) spinlock_t, rwlock_t, local_lock
  3) raw_spinlock_t and bit spinlocks

Lockdep will complain if these constraints are violated, both in
PREEMPT_RT and otherwise.
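
The following sketch shows one nesting order that is consistent with the
rules and ordering above on both PREEMPT_RT and non-PREEMPT_RT kernels;
the lock names and update() are hypothetical::

  static DEFINE_MUTEX(cfg_mutex);         /* sleeping lock */
  static DEFINE_SPINLOCK(list_lock);      /* spinning on !RT, sleeping on RT */
  static DEFINE_RAW_SPINLOCK(hw_lock);    /* always spinning */

  static void update(void)
  {
          mutex_lock(&cfg_mutex);
          spin_lock(&list_lock);          /* OK: category 2 inside category 1 */
          raw_spin_lock(&hw_lock);        /* OK: category 3 inside category 2 */
          /* ... */
          raw_spin_unlock(&hw_lock);
          spin_unlock(&list_lock);
          mutex_unlock(&cfg_mutex);

          /*
           * The reverse order, e.g. mutex_lock() under raw_spin_lock(),
           * would violate the rules above and make lockdep complain.
           */
  }
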