.. SPDX-License-Identifier: GPL-2.0

===========================
How realtime kernels differ
===========================

:Author: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Preface
=======

With forced-threaded interrupts and sleeping spin locks, code paths that
previously caused long scheduling latencies have been made preemptible and
moved into process context. This allows the scheduler to manage them more
effectively and respond to higher-priority tasks with reduced latency.
The following chapters provide an overview of key differences between a
PREEMPT_RT kernel and a standard, non-PREEMPT_RT kernel.

Locking
=======

Spinning locks such as spinlock_t are used to provide synchronization for
data structures accessed from both interrupt context and process context. For
this reason, locking functions are also available with the _irq() or
_irqsave() suffixes, which disable interrupts before acquiring the lock. This
ensures that the lock can be safely acquired in process context when
interrupts are enabled.

However, on a PREEMPT_RT system, interrupts are forced-threaded and no longer
run in hard IRQ context. As a result, there is no need to disable interrupts
as part of the locking procedure when using spinlock_t.

For low-level core components such as interrupt handling, the scheduler, or
the timer subsystem, the kernel uses raw_spinlock_t. This lock type preserves
traditional semantics: it disables preemption and, when used with _irq() or
_irqsave(), also disables interrupts. This ensures proper synchronization in
critical sections that must remain non-preemptible or run with interrupts
disabled.
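To illustrate the difference, here is a minimal sketch; the lock and function
names are invented for this example, and the comments restate the semantics
described above::

  #include <linux/spinlock.h>

  static DEFINE_SPINLOCK(dev_lock);       /* sleeping lock on PREEMPT_RT */
  static DEFINE_RAW_SPINLOCK(core_lock);  /* spinning lock on all kernels */

  static void update_shared_device_state(void)
  {
          unsigned long flags;

          /*
           * On non-PREEMPT_RT kernels this disables interrupts. On
           * PREEMPT_RT it does not, because the interrupt handler that
           * shares this data runs in a preemptible thread anyway.
           */
          spin_lock_irqsave(&dev_lock, flags);
          /* ... data shared with a (threaded) interrupt handler ... */
          spin_unlock_irqrestore(&dev_lock, flags);
  }

  static void update_core_state(void)
  {
          unsigned long flags;

          /*
           * raw_spinlock_t preserves the traditional semantics on every
           * kernel: preemption and interrupts are disabled here, so the
           * critical section must be short and bounded.
           */
          raw_spin_lock_irqsave(&core_lock, flags);
          /* ... short, bounded critical section ... */
          raw_spin_unlock_irqrestore(&core_lock, flags);
  }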
Execution context
=================

Interrupt handling in a PREEMPT_RT system is invoked in process context
through the use of threaded interrupts. Other parts of the kernel also shift
their execution into threaded context by different mechanisms. The goal is to
keep execution paths preemptible, allowing the scheduler to interrupt them
when a higher-priority task needs to run.

Below is an overview of the kernel subsystems involved in this transition to
threaded, preemptible execution.

Interrupt handling
------------------

All interrupts are forced-threaded in a PREEMPT_RT system. The exceptions are
interrupts that are requested with the IRQF_NO_THREAD, IRQF_PERCPU, or
IRQF_ONESHOT flags.

The IRQF_ONESHOT flag is used together with threaded interrupts, meaning
those registered using request_threaded_irq() and providing only a threaded
handler. Its purpose is to keep the interrupt line masked until the threaded
handler has completed.

If a primary handler is also provided in this case, it is essential that the
handler does not acquire any sleeping locks, as it will not be threaded. The
handler should be minimal and must avoid introducing delays, such as
busy-waiting on hardware registers.
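A sketch of a threaded-only registration follows; the mydev_* names are
hypothetical, and only the request_threaded_irq() usage itself is the point::

  #include <linux/device.h>
  #include <linux/interrupt.h>

  static irqreturn_t mydev_irq_thread(int irq, void *data)
  {
          /*
           * Runs in process context: sleeping locks and longer,
           * preemptible processing are permitted here.
           */
          return IRQ_HANDLED;
  }

  static int mydev_setup_irq(struct device *dev, int irq, void *data)
  {
          /*
           * No primary handler is provided, so IRQF_ONESHOT keeps the
           * interrupt line masked until mydev_irq_thread() returns.
           */
          return request_threaded_irq(irq, NULL, mydev_irq_thread,
                                      IRQF_ONESHOT, dev_name(dev), data);
  }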
Soft interrupts, bottom half handling
-------------------------------------

Soft interrupts are raised by the interrupt handler and are executed after
the handler returns. Since they run in thread context, they can be preempted
by other threads. Do not assume that softirq context runs with preemption
disabled. This means you must not rely on mechanisms like local_bh_disable()
in process context to protect per-CPU variables. Because softirq handlers are
preemptible under PREEMPT_RT, this approach does not provide reliable
synchronization.

If this kind of protection is required for performance reasons, consider
using local_lock_nested_bh(). On non-PREEMPT_RT kernels, this allows lockdep
to verify that bottom halves are disabled. On PREEMPT_RT systems, it adds the
necessary locking to ensure proper protection.

Using local_lock_nested_bh() also makes the locking scope explicit and easier
for readers and maintainers to understand.
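As a sketch, assuming a per-CPU statistics structure invented for this
example, the pattern looks as follows::

  #include <linux/local_lock.h>
  #include <linux/percpu.h>

  struct mydev_stats {
          local_lock_t    bh_lock;
          u64             packets;
  };

  static DEFINE_PER_CPU(struct mydev_stats, mydev_stats) = {
          .bh_lock = INIT_LOCAL_LOCK(bh_lock),
  };

  /* Called with bottom halves disabled, e.g. from softirq context. */
  static void mydev_count_packet(void)
  {
          /*
           * A lockdep annotation on non-PREEMPT_RT kernels; a real
           * per-CPU lock on PREEMPT_RT, where disabling bottom halves
           * alone no longer provides exclusion.
           */
          local_lock_nested_bh(&mydev_stats.bh_lock);
          this_cpu_inc(mydev_stats.packets);
          local_unlock_nested_bh(&mydev_stats.bh_lock);
  }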
per-CPU variables
-----------------

Protecting access to per-CPU variables solely by using preempt_disable()
should be avoided, especially if the critical section has unbounded runtime
or may call APIs that can sleep.

If using a spinlock_t is considered too costly for performance reasons,
consider using local_lock_t. On non-PREEMPT_RT configurations, this
introduces no runtime overhead when lockdep is disabled. With lockdep
enabled, it verifies that the lock is only acquired in process context and
never from softirq or hard IRQ context.

On a PREEMPT_RT kernel, local_lock_t is implemented using a per-CPU
spinlock_t, which provides safe local protection for per-CPU data while
keeping the system preemptible.

Because spinlock_t on PREEMPT_RT does not disable preemption, it cannot be
used to protect per-CPU data by relying on implicit preemption disabling. If
this inherited preemption disabling is essential, and if local_lock_t cannot
be used due to performance constraints, brevity of the code, or abstraction
boundaries within an API, then preempt_disable_nested() may be a suitable
alternative. On non-PREEMPT_RT kernels, it verifies with lockdep that
preemption is already disabled. On PREEMPT_RT, it explicitly disables
preemption.
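A minimal sketch of local_lock_t protecting per-CPU data; the cache structure
is made up for the example::

  #include <linux/local_lock.h>
  #include <linux/percpu.h>

  struct mydev_cache {
          local_lock_t    lock;
          void            *spare_buf;
  };

  static DEFINE_PER_CPU(struct mydev_cache, mydev_cache) = {
          .lock = INIT_LOCAL_LOCK(lock),
  };

  static void *mydev_take_spare(void)
  {
          void *buf;

          /*
           * Disables preemption on non-PREEMPT_RT kernels; acquires a
           * per-CPU spinlock_t on PREEMPT_RT, so the section stays
           * preemptible while still being protected.
           */
          local_lock(&mydev_cache.lock);
          buf = this_cpu_read(mydev_cache.spare_buf);
          this_cpu_write(mydev_cache.spare_buf, NULL);
          local_unlock(&mydev_cache.lock);

          return buf;
  }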
Timers
------

By default, an hrtimer is executed in hard interrupt context. The exception
is timers initialized with the HRTIMER_MODE_SOFT flag, which are executed in
softirq context.

On a PREEMPT_RT kernel, this behavior is reversed: hrtimers are executed in
softirq context by default, typically within the ktimersd thread. This thread
runs at the lowest real-time priority, ensuring it executes before any
SCHED_OTHER tasks but does not interfere with higher-priority real-time
threads. To explicitly request execution in hard interrupt context on
PREEMPT_RT, the timer must be marked with the HRTIMER_MODE_HARD flag.
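A sketch of a timer that keeps hard interrupt execution on PREEMPT_RT,
assuming a recent kernel that provides hrtimer_setup(); the names are
hypothetical::

  #include <linux/hrtimer.h>

  static struct hrtimer mydev_timer;

  static enum hrtimer_restart mydev_timer_fn(struct hrtimer *t)
  {
          /*
           * With HRTIMER_MODE_REL_HARD this runs in hard interrupt
           * context even on PREEMPT_RT: no sleeping locks, and the
           * handler must stay short.
           */
          return HRTIMER_NORESTART;
  }

  static void mydev_timer_init(void)
  {
          hrtimer_setup(&mydev_timer, mydev_timer_fn, CLOCK_MONOTONIC,
                        HRTIMER_MODE_REL_HARD);
  }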
Memory allocation
-----------------

The memory allocation APIs, such as kmalloc() and alloc_pages(), require a
gfp_t flag to indicate the allocation context. On non-PREEMPT_RT kernels, it
is necessary to use GFP_ATOMIC when allocating memory from interrupt context
or from sections where preemption is disabled. This is because the allocator
must not sleep in these contexts waiting for memory to become available.

However, this approach does not work on PREEMPT_RT kernels. The memory
allocator in PREEMPT_RT uses sleeping locks internally, which cannot be
acquired when preemption is disabled. Fortunately, this is generally not a
problem, because PREEMPT_RT moves most contexts that would traditionally run
with preemption or interrupts disabled into threaded context, where sleeping
is allowed.

What remains problematic is code that explicitly disables preemption or
interrupts. In such cases, memory allocation must be performed outside the
critical section.

This restriction also applies to memory deallocation routines such as kfree()
and free_pages(), which may also involve internal locking and must not be
called from non-preemptible contexts.
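A sketch of the resulting pattern, with an invented list and entry type:
allocate before entering the non-preemptible section, so no allocator call
happens with preemption or interrupts disabled::

  #include <linux/list.h>
  #include <linux/slab.h>
  #include <linux/spinlock.h>

  struct mydev_entry {
          struct list_head node;
  };

  static LIST_HEAD(mydev_list);
  static DEFINE_RAW_SPINLOCK(mydev_lock);

  static int mydev_add_entry(void)
  {
          struct mydev_entry *entry;
          unsigned long flags;

          /* Allocate first, while sleeping is still allowed ... */
          entry = kmalloc(sizeof(*entry), GFP_KERNEL);
          if (!entry)
                  return -ENOMEM;

          /* ... then enter the non-preemptible critical section. */
          raw_spin_lock_irqsave(&mydev_lock, flags);
          list_add(&entry->node, &mydev_list);
          raw_spin_unlock_irqrestore(&mydev_lock, flags);

          return 0;
  }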
IRQ work
--------

The irq_work API provides a mechanism to schedule a callback in interrupt
context. It is designed for use in contexts where traditional scheduling is
not possible, such as from within NMI handlers or from inside the scheduler,
where using a workqueue would be unsafe.

On non-PREEMPT_RT systems, all irq_work items are executed immediately in
interrupt context. Items marked with IRQ_WORK_LAZY are deferred until the
next timer tick but are still executed in interrupt context.

On PREEMPT_RT systems, the execution model changes. Because irq_work
callbacks may acquire sleeping locks or have unbounded execution time, they
are handled in thread context by a per-CPU irq_work kernel thread. This
thread runs at the lowest real-time priority, ensuring it executes before any
SCHED_OTHER tasks but does not interfere with higher-priority real-time
threads.

The exceptions are work items marked with IRQ_WORK_HARD_IRQ, which are still
executed in hard interrupt context. Lazy items (IRQ_WORK_LAZY) continue to be
deferred until the next timer tick and are also executed by the irq_work/
thread.
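A minimal sketch of queueing an irq_work item; the callback name is invented
for this example::

  #include <linux/irq_work.h>

  static void mydev_wakeup_fn(struct irq_work *work)
  {
          /*
           * On PREEMPT_RT this runs in the per-CPU irq_work thread;
           * initializing the item with IRQ_WORK_INIT_HARD() instead
           * would keep it in hard interrupt context.
           */
  }

  static struct irq_work mydev_wakeup = IRQ_WORK_INIT(mydev_wakeup_fn);

  /* Safe from contexts where a workqueue must not be used. */
  static void mydev_poke(void)
  {
          irq_work_queue(&mydev_wakeup);
  }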
RCU callbacks
-------------

RCU callbacks are invoked by default in softirq context. Their execution is
important because, depending on the use case, they either free memory or
ensure progress in state transitions. Running these callbacks as part of the
softirq chain can lead to undesired situations, such as contention for CPU
resources with other SCHED_OTHER tasks when executed within ksoftirqd.

To avoid running callbacks in softirq context, the RCU subsystem provides a
mechanism to execute them in process context instead. This behavior can be
enabled by setting the boot command-line parameter rcutree.use_softirq=0.
This setting is enforced in kernels configured with PREEMPT_RT.

Spin until ready
================

The "spin until ready" pattern involves repeatedly checking (spinning on) the
state of a data structure until it becomes available. This pattern assumes
that preemption, soft interrupts, or interrupts are disabled. If the data
structure is marked busy, it is presumed to be in use by another CPU, and
spinning should eventually succeed as that CPU makes progress.

Some examples are hrtimer_cancel() or timer_delete_sync(). These functions
cancel timers that execute with interrupts or soft interrupts disabled. If a
thread attempts to cancel a timer and finds it active, spinning until the
callback completes is safe because the callback can only run on another CPU
and will eventually finish.

On PREEMPT_RT kernels, however, timer callbacks run in thread context. This
introduces a challenge: a higher-priority thread attempting to cancel the
timer may preempt the timer callback thread. Since the scheduler cannot
migrate the callback thread to another CPU due to affinity constraints,
spinning can result in livelock even on multiprocessor systems.

To avoid this, both the canceling and callback sides must use a handshake
mechanism that supports priority inheritance. This allows the canceling
thread to suspend until the callback completes, ensuring forward progress
without risking livelock.

In order to solve the problem at the API level, the sequence locks were
extended to allow a proper handover between the spinning reader and the
possibly blocked writer.

Sequence locks
--------------

Sequence counters and sequential locks are documented in
Documentation/locking/seqlock.rst.

The interface has been extended to ensure proper preemption states for the
writer and spinning reader contexts. This is achieved by embedding the writer
serialization lock directly into the sequence counter type, resulting in
composite types such as seqcount_spinlock_t or seqcount_mutex_t.

These composite types allow readers to detect an ongoing write and actively
boost the writer's priority to help it complete its update instead of
spinning and waiting for its completion.
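A sketch of the seqcount_spinlock_t pattern, with an invented structure; the
associated spinlock serializes writers, and on PREEMPT_RT it lets readers
boost a preempted writer instead of spinning indefinitely::

  #include <linux/seqlock.h>
  #include <linux/spinlock.h>

  struct mydev_time {
          spinlock_t              lock;
          seqcount_spinlock_t     seq;
          u64                     cycles;
  };

  static void mydev_time_init(struct mydev_time *t)
  {
          spin_lock_init(&t->lock);
          seqcount_spinlock_init(&t->seq, &t->lock);
  }

  /* Writer side: updates are serialized by the embedded lock. */
  static void mydev_time_update(struct mydev_time *t, u64 cycles)
  {
          spin_lock(&t->lock);
          write_seqcount_begin(&t->seq);
          t->cycles = cycles;
          write_seqcount_end(&t->seq);
          spin_unlock(&t->lock);
  }

  /* Reader side: retries if an update was in flight. */
  static u64 mydev_time_read(struct mydev_time *t)
  {
          unsigned int seq;
          u64 cycles;

          do {
                  seq = read_seqcount_begin(&t->seq);
                  cycles = t->cycles;
          } while (read_seqcount_retry(&t->seq, seq));

          return cycles;
  }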
If the plain seqcount_t is used, extra care must be taken to synchronize the
reader with the writer during updates. The writer must ensure its update is
serialized and non-preemptible relative to the reader. This cannot be
achieved using a regular spinlock_t, because spinlock_t on PREEMPT_RT does
not disable preemption. In such cases, using seqcount_spinlock_t is the
preferred solution.

However, if there is no spinning involved, i.e., if the reader only needs to
detect whether a write has started and does not need to serialize against it,
then using seqcount_t is reasonable.