===========================================================================
Proper Locking Under a Preemptible Kernel: Keeping Kernel Code Preempt-Safe
===========================================================================

:Author: Robert Love <rml@tech9.net>


Introduction
============


A preemptible kernel creates new locking issues.  The issues are the same as
those under SMP: concurrency and reentrancy.  Thankfully, the Linux preemptible
kernel model leverages existing SMP locking mechanisms.  Thus, the kernel
requires explicit additional locking for very few additional situations.

This document is for all kernel hackers.  Developing code in the kernel
requires protecting these situations.


RULE #1: Per-CPU data structures need explicit protection
----------------------------------------------------------


Two similar problems arise.  An example code snippet::

        struct this_needs_locking tux[NR_CPUS];
        tux[smp_processor_id()] = some_value;
        /* task is preempted here... */
        something = tux[smp_processor_id()];

First, since the data is per-CPU, it may not have explicit SMP locking, but it
still needs protection from preemption.  Second, when a preempted task is
finally rescheduled, the previous value of smp_processor_id may not equal the
current one.  You must protect these situations by disabling preemption around
them.

You can also use get_cpu() and put_cpu(): get_cpu() disables preemption and
returns the current CPU number, and put_cpu() re-enables preemption.
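
For instance, the snippet above could be written with get_cpu()/put_cpu().
This is only a sketch, reusing the hypothetical tux array and some_value from
the example::

        int cpu = get_cpu();    /* disables preemption, returns current CPU */

        tux[cpu] = some_value;
        /* preemption is off, so this is still the same CPU */
        something = tux[cpu];

        put_cpu();              /* re-enables preemption */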

RULE #2: CPU state must be protected
------------------------------------


Under preemption, the state of the CPU must be protected.  This is
arch-dependent, but includes CPU structures and state not preserved over a
context switch.  For example, on x86, entering and exiting FPU mode is now a
critical section that must occur while preemption is disabled.  Think what
would happen if the kernel is executing a floating-point instruction and is
then preempted.  Remember, the kernel does not save FPU state except for user
tasks.  Therefore, upon preemption, the FPU registers will be sold to the
lowest bidder.  Thus, preemption must be disabled around such regions.

Note, some FPU functions are already explicitly preempt safe.  For example,
kernel_fpu_begin and kernel_fpu_end will disable and enable preemption.
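
As a minimal sketch of that pattern (the actual FPU/SIMD work is only a
placeholder comment here)::

        kernel_fpu_begin();     /* preemption disabled for the FPU region */

        /* ... FPU/SIMD instructions may be used here ... */

        kernel_fpu_end();       /* preemption enabled again */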

RULE #3: Lock acquire and release must be performed by same task
-----------------------------------------------------------------


A lock acquired in one task must be released by the same task.  This
means you can't do oddball things like acquire a lock and go off to
play while another task releases it.  If you want to do something like
this, acquire and release the lock in the same code path and have the
caller wait on an event signaled by the other task.


Solution
========


Data protection under preemption is achieved by disabling preemption for the
duration of the critical region.

::

  preempt_enable()              decrement the preempt counter
  preempt_disable()             increment the preempt counter
  preempt_enable_no_resched()   decrement, but do not immediately preempt
  preempt_check_resched()       if needed, reschedule
  preempt_count()               return the preempt counter

The functions are nestable.  In other words, you can call preempt_disable
n-times in a code path, and preemption will not be reenabled until the n-th
call to preempt_enable.  The preempt statements are defined to nothing if
preemption is not configured into the kernel.
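
For instance, nesting behaves as follows (the preempt counter values in the
comments are only illustrative and assume a starting count of zero)::

        preempt_disable();      /* count 0 -> 1: preemption now off */
        preempt_disable();      /* count 1 -> 2: still off */

        /* ... work on per-CPU data or CPU state ... */

        preempt_enable();       /* count 2 -> 1: preemption still off */
        preempt_enable();       /* count 1 -> 0: preemption possible again */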

Note that you do not need to explicitly prevent preemption if you are holding
any locks or interrupts are disabled, since preemption is implicitly disabled
in those cases.

But keep in mind that 'irqs disabled' is a fundamentally unsafe way of
disabling preemption - any cond_resched() or cond_resched_lock() might trigger
a reschedule if the preempt count is 0.  A simple printk() might trigger a
reschedule.  So use this implicit preemption-disabling property only if you
know that the affected codepath does not do any of this.  Best policy is to
use this only for small, atomic code that you wrote and which calls no complex
functions.

Example::

        cpucache_t *cc; /* this is per-CPU */
        preempt_disable();
        cc = cc_data(searchp);
        if (cc && cc->avail) {
                __free_block(searchp, cc_entry(cc), cc->avail);
                cc->avail = 0;
        }
        preempt_enable();
        return 0;

Notice how the preemption statements must encompass every reference of the
critical variables.  Another example::

        int buf[NR_CPUS];
        set_cpu_val(buf);
        if (buf[smp_processor_id()] == -1) printk(KERN_INFO "wee!\n");
        spin_lock(&buf_lock);
        /* ... */

This code is not preempt-safe, but see how easily we can fix it by simply
moving the spin_lock up two lines.
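
The fixed version, with the spin_lock moved up so that the lock (which
implicitly disables preemption) covers the per-CPU accesses::

        int buf[NR_CPUS];
        spin_lock(&buf_lock);
        set_cpu_val(buf);
        if (buf[smp_processor_id()] == -1) printk(KERN_INFO "wee!\n");
        /* ... */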

Preventing preemption using interrupt disabling
===============================================


It is possible to prevent a preemption event using local_irq_disable and
local_irq_save.  Note, when doing so, you must be very careful to not cause
an event that would set need_resched and result in a preemption check.  When
in doubt, rely on locking or explicit preemption disabling.

Note that in 2.5 interrupt disabling is now only per-CPU (i.e. local).

An additional concern is proper usage of local_irq_disable and local_irq_save.
These may be used to protect from preemption, however, on exit, if preemption
may be enabled, a test to see if preemption is required should be done.  If
these are called from the spin_lock and read/write lock macros, the right
thing is done.  They may also be called within a spin-lock protected region,
however, if they are ever called outside of this context, a test for
preemption should be made.  Do note that calls from interrupt context or
bottom halves/tasklets are also protected by preemption locks and so may use
the versions which do not check preemption.
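
A minimal sketch of that pattern, under the assumption that nothing inside the
protected region can set need_resched or otherwise cause a reschedule::

        unsigned long flags;

        local_irq_save(flags);
        /*
         * Touch per-CPU data or CPU state here.  Nothing in this
         * region may sleep, reschedule, or set need_resched.
         */
        local_irq_restore(flags);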