.. SPDX-License-Identifier: GPL-2.0

===========================
Affinity managed interrupts
===========================

The IRQ core provides support for managing interrupts according to a
specified CPU affinity. Under normal operation, an interrupt is associated
with a particular CPU. If that CPU is taken offline, the interrupt is
migrated to another online CPU.

Devices with large numbers of interrupt vectors can stress the available
vector space. For example, an NVMe device with 128 I/O queues typically
requests one interrupt per queue on systems with at least 128 CPUs. Two
such devices therefore request 256 interrupts. On x86, the interrupt
vector space is notoriously small, providing only 256 vectors per CPU,
and the kernel reserves a subset of these, further reducing the number
available for device interrupts. In practice this is not an issue because
the interrupts are distributed across many CPUs, so each CPU only receives
a small number of vectors.

During system suspend, however, all secondary CPUs are taken offline and
all interrupts are migrated to the single CPU that remains online. This
can exhaust the available interrupt vectors on that CPU and cause the
suspend operation to fail.

Affinity-managed interrupts address this limitation. Each interrupt is
assigned a CPU affinity mask that specifies the set of CPUs on which the
interrupt may be targeted. When a CPU in the mask goes offline, the
interrupt is moved to the next CPU in the mask. If the last CPU in the
mask goes offline, the interrupt is shut down. Drivers using
affinity-managed interrupts must ensure that the associated queue is
quiesced before the interrupt is disabled so that no further interrupts
are generated. When a CPU in the affinity mask comes back online, the
interrupt is re-enabled.

Implementation
==============

Devices must provide per-instance interrupts, such as per-I/O-queue
interrupts for storage devices like NVMe. The driver allocates interrupt
vectors with the required affinity settings using struct irq_affinity.
For MSI-X devices, this is done via pci_alloc_irq_vectors_affinity()
with the PCI_IRQ_AFFINITY flag set.

Based on the provided affinity information, the IRQ core attempts to
spread the interrupts evenly across the system. The affinity masks are
computed during this allocation step, but the final IRQ assignment is
performed when request_irq() is invoked.

Isolated CPUs
=============

The affinity of managed interrupts is handled entirely in the kernel and
cannot be modified from user space through the /proc interfaces. The
managed_irq sub-parameter of the isolcpus boot option specifies a CPU
mask that managed interrupts should attempt to avoid. This isolation is
best-effort and only applies if the automatically assigned interrupt mask
also contains online CPUs outside the avoided mask. If the requested mask
contains only isolated CPUs, the setting has no effect.

CPUs listed in the avoided mask remain part of the interrupt's affinity
mask. This means that if all non-isolated CPUs go offline while isolated
CPUs remain online, the interrupt will be assigned to one of the isolated
CPUs.
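
The isolation described above is configured on the kernel command line.
As a hypothetical example, to keep managed interrupts away from CPUs 4-7
(the CPU numbers here are illustrative)::

    isolcpus=managed_irq,4-7

The managed_irq flag can be combined with the other isolcpus flags such
as domain; see the kernel-parameters documentation for the full syntax.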
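
The allocation flow described in the Implementation section can be
sketched as follows. This is a minimal, hypothetical driver fragment,
not code from any real driver: the function name, nr_queues, and the
request_irq() arguments in the comment are illustrative.

.. code-block:: c

    #include <linux/pci.h>
    #include <linux/interrupt.h>

    static int my_driver_setup_irqs(struct pci_dev *pdev,
                                    unsigned int nr_queues)
    {
            struct irq_affinity affd = {
                    .pre_vectors = 1,  /* e.g. one unmanaged admin vector */
            };
            int nvecs;

            /*
             * Request one managed MSI-X vector per queue plus the admin
             * vector. The IRQ core computes and spreads the per-vector
             * affinity masks during this call.
             */
            nvecs = pci_alloc_irq_vectors_affinity(pdev, 2, nr_queues + 1,
                                                   PCI_IRQ_MSIX |
                                                   PCI_IRQ_AFFINITY,
                                                   &affd);
            if (nvecs < 0)
                    return nvecs;

            /*
             * The final vector-to-CPU assignment happens only when each
             * vector is requested, e.g. per queue i:
             *
             *   request_irq(pci_irq_vector(pdev, i), my_handler, 0,
             *               "my-queue", queue_data);
             */
            return nvecs;
    }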