=========
Workqueue
=========

:Date: September, 2010
:Author: Tejun Heo <tj@kernel.org>
:Author: Florian Mickler <florian@mickler.org>


Introduction
============

There are many cases where an asynchronous process execution context
is needed and the workqueue (wq) API is the most commonly used
mechanism for such cases.

When such an asynchronous execution context is needed, a work item
describing which function to execute is put on a queue.  An
independent thread serves as the asynchronous execution context.  The
queue is called workqueue and the thread is called worker.

While there are work items on the workqueue the worker executes the
functions associated with the work items one after the other.  When
there is no work item left on the workqueue the worker becomes idle.
When a new work item gets queued, the worker begins executing again.
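As a minimal sketch of these concepts (the work item name ``my_work``,
its function and the wrapper below are hypothetical, not part of the
existing API), a work item is declared with the function it should
execute and then put on a workqueue::

  #include <linux/printk.h>
  #include <linux/workqueue.h>

  /* the function a worker will execute for this work item */
  static void my_work_fn(struct work_struct *work)
  {
          pr_info("my_work_fn ran in a worker thread\n");
  }

  static DECLARE_WORK(my_work, my_work_fn);

  static void example_kick_my_work(void)
  {
          /*
           * Put the work item on the system workqueue; an idle worker
           * (or a newly woken one) will pick it up and call
           * my_work_fn() in process context.
           */
          schedule_work(&my_work);
  }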
Why Concurrency Managed Workqueue?
==================================

In the original wq implementation, a multi threaded (MT) wq had one
worker thread per CPU and a single threaded (ST) wq had one worker
thread system-wide.  A single MT wq needed to keep around the same
number of workers as the number of CPUs.  The kernel grew a lot of MT
wq users over the years and with the number of CPU cores continuously
rising, some systems saturated the default 32k PID space just booting
up.

Although MT wq wasted a lot of resources, the level of concurrency
provided was unsatisfactory.  The limitation was common to both ST and
MT wq albeit less severe on MT.  Each wq maintained its own separate
worker pool.  An MT wq could provide only one execution context per
CPU while an ST wq one for the whole system.  Work items had to
compete for those very limited execution contexts leading to various
problems including proneness to deadlocks around the single execution
context.

The tension between the provided level of concurrency and resource
usage also forced its users to make unnecessary tradeoffs like libata
choosing to use ST wq for polling PIOs and accepting an unnecessary
limitation that no two polling PIOs can progress at the same time.  As
MT wq don't provide much better concurrency, users which required a
higher level of concurrency, like async or fscache, had to implement
their own thread pool.
Concurrency Managed Workqueue (cmwq) is a reimplementation of wq with
focus on the following goals.

* Maintain compatibility with the original workqueue API.

* Use per-CPU unified worker pools shared by all wq to provide
  flexible level of concurrency on demand without wasting a lot of
  resource.


Example Execution Scenarios
===========================

And with cmwq with ``@max_active`` >= 3, ::

  TIME IN MSECS  EVENT
  0              w0 starts and burns CPU
  5              w0 sleeps
  5              w1 starts and burns CPU
  10             w1 sleeps
  10             w2 starts and burns CPU
  15             w2 sleeps
  15             w0 wakes up and burns CPU
  20             w0 finishes
  20             w1 wakes up and finishes
  25             w2 wakes up and finishes

If ``@max_active`` == 2, ::

  TIME IN MSECS  EVENT
  0              w0 starts and burns CPU
  5              w0 sleeps
  5              w1 starts and burns CPU
  10             w1 sleeps
  15             w0 wakes up and burns CPU
  20             w0 finishes
  20             w1 wakes up and finishes
  20             w2 starts and burns CPU
  25             w2 sleeps
  35             w2 wakes up and finishes

Now, let's assume w1 and w2 are queued to a different wq q1 which has
``WQ_CPU_INTENSIVE`` set, ::

  TIME IN MSECS  EVENT
  0              w0 starts and burns CPU
  5              w0 sleeps
  5              w1 and w2 start and burn CPU
  10             w1 sleeps
  15             w2 sleeps
  15             w0 wakes up and burns CPU
  20             w0 finishes
  20             w1 wakes up and finishes
  25             w2 wakes up and finishes
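The timelines above are determined by each workqueue's ``@max_active``
and flags.  As a rough sketch only (the queue names and the setup
function are illustrative, not taken from this document), workqueues
with a given ``@max_active`` and with ``WQ_CPU_INTENSIVE`` set could be
created like this::

  #include <linux/errno.h>
  #include <linux/workqueue.h>

  static struct workqueue_struct *q0, *q1;

  static int example_create_queues(void)
  {
          /* per-cpu workqueue with @max_active of 3 */
          q0 = alloc_workqueue("q0", 0, 3);

          /*
           * per-cpu workqueue whose work items are excluded from
           * concurrency management (WQ_CPU_INTENSIVE)
           */
          q1 = alloc_workqueue("q1", WQ_CPU_INTENSIVE, 0);

          if (!q0 || !q1)
                  return -ENOMEM;
          return 0;
  }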
Guidelines
==========

* Do not forget to use ``WQ_MEM_RECLAIM`` if a wq may process work
  items which are used during memory reclaim.  Each wq with
  ``WQ_MEM_RECLAIM`` set has an execution context reserved for it.  If
  there is dependency among multiple work items used during memory
  reclaim, they should be queued to separate wq each with
  ``WQ_MEM_RECLAIM`` (a sketch follows this list).

* Unless strict ordering is required, there is no need to use ST wq.

* Unless there is a specific need, using 0 for @max_active is
  recommended.  In most use cases, concurrency level usually stays
  well under the default limit.

* A wq serves as a domain for forward progress guarantee
  (``WQ_MEM_RECLAIM``), flush and work item attributes.  Work items
  which are not involved in memory reclaim and don't need to be
  flushed as a part of a group of work items, and don't require any
  special attribute, can use one of the system wq.  There is no
  difference in execution characteristics between using a dedicated wq
  and a system wq.

  Note: If something may generate more than @max_active outstanding
  work items (do stress test your producers), it may saturate a system
  wq and potentially lead to deadlock.  It should utilize its own
  dedicated workqueue rather than the system wq.

* Unless work items are expected to consume a huge amount of CPU
  cycles, using a bound wq is usually beneficial due to the increased
  level of locality in wq operations and work item execution.
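As a sketch of the first guideline above (the workqueue names and the
setup function are hypothetical), two dependent work items used during
memory reclaim would each get their own ``WQ_MEM_RECLAIM`` workqueue::

  #include <linux/errno.h>
  #include <linux/workqueue.h>

  static struct workqueue_struct *xmit_wq, *cleanup_wq;

  static int example_create_reclaim_wqs(void)
  {
          /*
           * Each WQ_MEM_RECLAIM workqueue has one execution context
           * reserved for it, so reclaim-time work items that depend
           * on each other are queued to separate workqueues.
           */
          xmit_wq = alloc_workqueue("xmit", WQ_MEM_RECLAIM, 0);
          if (!xmit_wq)
                  return -ENOMEM;

          cleanup_wq = alloc_workqueue("cleanup", WQ_MEM_RECLAIM, 0);
          if (!cleanup_wq) {
                  destroy_workqueue(xmit_wq);
                  return -ENOMEM;
          }
          return 0;
  }

Passing 0 as the third argument keeps the default ``@max_active``
limit, matching the third guideline.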
Affinity Scopes
===============

An unbound workqueue groups CPUs according to its affinity scope to
improve cache locality.  For example, if a workqueue is using the
default affinity scope of "cache", it will group CPUs according to
last level cache boundaries.  A work item queued on the workqueue will
be assigned to a worker on one of the CPUs which share the last level
cache with the issuing CPU.  Once started, the worker may or may not
be allowed to move outside the scope depending on the
``affinity_strict`` setting of the scope.

Workqueue currently supports the following affinity scopes.

``default``
  Use the scope in module parameter
  ``workqueue.default_affinity_scope`` which is always set to one of
  the scopes below.

``cpu``
  CPUs are not grouped.  A work item issued on one CPU is processed by
  a worker on the same CPU.  This makes unbound workqueues behave as
  per-cpu workqueues without concurrency management.

``smt``
  CPUs are grouped according to SMT boundaries.  This usually means
  that the logical threads of each physical CPU core are grouped
  together.

``cache``
  CPUs are grouped according to cache boundaries.  Which specific
  cache boundary is used is determined by the arch code.  L3 is used
  in a lot of cases.  This is the default affinity scope.

``numa``
  CPUs are grouped according to NUMA boundaries.

``system``
  All CPUs are put in the same group.  Workqueue makes no effort to
  process a work item on a CPU close to the issuing CPU.
The default affinity scope can be changed with the module parameter
``workqueue.default_affinity_scope`` and a specific workqueue's
affinity scope can be changed using ``apply_workqueue_attrs()``.
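As a sketch only (assuming an unbound workqueue and that the attrs
helpers below are available to the calling code, which depends on the
kernel version and symbol exports), switching a workqueue to a strict
"numa" scope via ``apply_workqueue_attrs()`` might look like::

  #include <linux/errno.h>
  #include <linux/workqueue.h>

  static int example_make_wq_numa_strict(struct workqueue_struct *wq)
  {
          struct workqueue_attrs *attrs;
          int ret;

          attrs = alloc_workqueue_attrs();
          if (!attrs)
                  return -ENOMEM;

          /* group workers by NUMA node and keep them inside the node */
          attrs->affn_scope = WQ_AFFN_NUMA;
          attrs->affn_strict = true;

          ret = apply_workqueue_attrs(wq, attrs);
          free_workqueue_attrs(attrs);
          return ret;
  }

The ``affn_scope`` and ``affn_strict`` fields are described in the
struct workqueue_attrs reference at the end of this document.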
If ``WQ_SYSFS`` is set, the workqueue will have the following affinity
scope related interface files under its
``/sys/devices/virtual/workqueue/WQ_NAME/`` directory.

``affinity_scope``
  Read to see the current affinity scope.  Write to change.

  When default is the current scope, reading this file will also show
  the current effective scope in parentheses, for example, ``default
  (cache)``.

``affinity_strict``
  0 by default indicating that affinity scopes are not strict.  When a
  work item starts execution, workqueue makes a best-effort attempt to
  ensure that the worker is inside its affinity scope, which is called
  repatriation.  Once started, the scheduler is free to move the
  worker anywhere in the system as it sees fit.  This enables
  benefiting from scope locality while still being able to utilize
  other CPUs if necessary and available.

  If set to 1, all workers of the scope are guaranteed always to be in
  the scope.  This may be useful when crossing affinity scopes has
  other implications, for example, in terms of power consumption or
  workload isolation.  Strict NUMA scope can also be used to match the
  workqueue behavior of older kernels.


Affinity Scopes and Performance
===============================

It'd be ideal if an unbound workqueue's behavior is optimal for the
vast majority of use cases without further tuning.  Unfortunately, in
the current kernel, there exists a pronounced trade-off between
locality and utilization necessitating explicit configurations when
workqueues are heavily used.

Higher locality leads to higher efficiency where more work is
performed for the same number of consumed CPU cycles.  However, higher
locality may also cause lower overall system utilization if the work
items are not spread enough across the affinity scopes by the issuers.
The following performance testing with dm-crypt clearly illustrates
this trade-off.

The tests are run on a CPU with 12-cores/24-threads split across four
L3 caches (AMD Ryzen 9 3900x).  CPU clock boost is turned off for
consistency.  ``/dev/dm-0`` is a dm-crypt device created on NVME SSD
(Samsung 990 PRO) and opened with ``cryptsetup`` with default
settings.
Scenario 1: Enough issuers and work spread across the machine
--------------------------------------------------------------

The command used: ::

  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k --ioengine=libaio \
    --iodepth=64 --runtime=60 --numjobs=24 --time_based --group_reporting \
    --name=iops-test-job --verify=sha512

There are 24 issuers, each issuing 64 IOs concurrently.
``--verify=sha512`` makes ``fio`` generate and read back the content
each time which makes execution locality matter between the issuer and
``kcryptd``.  The following are the read bandwidths and CPU
utilizations depending on different affinity scope settings on
``kcryptd`` measured over five runs.  Bandwidths are in MiBps, and CPU
util in percents.

==============  =================  ============
Affinity        Bandwidth (MiBps)  CPU util (%)
==============  =================  ============
system          1159.40 ±1.34      99.31 ±0.02
cache           1166.40 ±0.89      99.34 ±0.01
cache (strict)  1166.00 ±0.71      99.35 ±0.01
==============  =================  ============

With enough issuers spread across the system, there is no downside to
"cache", strict or otherwise.  All three configurations saturate the
whole machine but the cache-affine ones outperform by 0.6% thanks to
improved locality.


Scenario 2: Fewer issuers, enough work for saturation
-----------------------------------------------------

The command is the same as in scenario 1 except with ``--numjobs=8``.

==============  =================  ============
Affinity        Bandwidth (MiBps)  CPU util (%)
==============  =================  ============
cache           1154.40 ±1.14      96.15 ±0.09
cache (strict)  1112.00 ±4.64      93.26 ±0.35
==============  =================  ============

This is more than enough work to saturate the system.  Both "system"
and "cache" are nearly saturating the machine but not fully.  "cache"
is using less CPU but the better efficiency puts it at the same
bandwidth as "system".

Eight issuers moving around over four L3 cache scopes still allow
"cache (strict)" to mostly saturate the machine but the loss of work
conservation is now starting to hurt with 3.7% bandwidth loss.
Scenario 3: Even fewer issuers, not enough work to saturate
-----------------------------------------------------------

The command used: ::

  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k \
    --ioengine=libaio --iodepth=64 --runtime=60 --numjobs=4 \
    --time_based --group_reporting --name=iops-test-job --verify=sha512

Again, the only difference is ``--numjobs=4``.  With the number of
issuers reduced to four, there now isn't enough work to saturate the
whole system and the bandwidth becomes dependent on completion
latencies.

==============  =================  ============
Affinity        Bandwidth (MiBps)  CPU util (%)
==============  =================  ============
system          993.60 ±1.82       75.49 ±0.06
cache           973.40 ±1.52       74.90 ±0.07
cache (strict)  828.20 ±4.49       66.84 ±0.29
==============  =================  ============
Now, the tradeoff between locality and utilization is clearer.
"cache" shows a 2% bandwidth loss compared to "system" and "cache
(strict)" a whopping 20%.


Conclusion and Recommendations
------------------------------

In the above experiments, the efficiency advantage of the "cache"
affinity scope over "system" is, while consistent and noticeable,
small.  However, the impact is dependent on the distances between the
scopes and may be more pronounced in processors with more complex
topologies.

While the loss of work-conservation in certain scenarios hurts, it is
a lot better than "cache (strict)" and maximizing workqueue
utilization is unlikely to be the common case anyway.  As such,
"cache" is the default affinity scope for unbound pools.

* As there is no one option which is great for most cases, workqueue
  usages that may consume a significant amount of CPU are recommended
  to configure the workqueues using ``apply_workqueue_attrs()`` and/or
  enable ``WQ_SYSFS`` (a sketch follows this list).

* An unbound workqueue with strict "cpu" affinity scope behaves the
  same as a ``WQ_CPU_INTENSIVE`` per-cpu workqueue.  There is no real
  advantage to the latter and an unbound workqueue provides a lot more
  flexibility.

* Affinity scopes are introduced in Linux v6.5.  To emulate the
  previous behavior, use strict "numa" affinity scope.

* The loss of work-conservation in non-strict affinity scopes is
  likely originating from the scheduler.  There is no theoretical
  reason why the kernel wouldn't be able to do the right thing and
  maintain work-conservation in most cases.  As such, it is possible
  that future scheduler improvements may make most of these tunables
  unnecessary.
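As a brief sketch of the first recommendation above (the workqueue
name and the wrapper function are illustrative), the sysfs knobs
become available when the workqueue is created with ``WQ_SYSFS``::

  #include <linux/errno.h>
  #include <linux/workqueue.h>

  static struct workqueue_struct *crunch_wq;

  static int example_create_tunable_wq(void)
  {
          /*
           * WQ_SYSFS exposes the workqueue under
           * /sys/devices/virtual/workqueue/crunch/ so attributes such
           * as affinity_scope and affinity_strict can be tuned at
           * runtime.
           */
          crunch_wq = alloc_workqueue("crunch", WQ_UNBOUND | WQ_SYSFS, 0);
          return crunch_wq ? 0 : -ENOMEM;
  }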
Examining Configuration
=======================

Use tools/workqueue/wq_dump.py to examine unbound CPU affinity
configuration, worker pools and how workqueues map to the pools: ::

  $ tools/workqueue/wq_dump.py
  Affinity Scopes
  ===============
  wq_unbound_cpumask=0000000f

  CPU
    nr_pods  4
    pod_cpus [0]=00000001 [1]=00000002 [2]=00000004 [3]=00000008
    pod_node [0]=0 [1]=0 [2]=1 [3]=1
    cpu_pod  [0]=0 [1]=1 [2]=2 [3]=3

  SMT
    nr_pods  4
    pod_cpus [0]=00000001 [1]=00000002 [2]=00000004 [3]=00000008
    pod_node [0]=0 [1]=0 [2]=1 [3]=1
    cpu_pod  [0]=0 [1]=1 [2]=2 [3]=3

  CACHE (default)
    nr_pods  2
    pod_cpus [0]=00000003 [1]=0000000c
    pod_node [0]=0 [1]=1
    cpu_pod  [0]=0 [1]=0 [2]=1 [3]=1

  NUMA
    nr_pods  2
    pod_cpus [0]=00000003 [1]=0000000c
    pod_node [0]=0 [1]=1
    cpu_pod  [0]=0 [1]=0 [2]=1 [3]=1

  SYSTEM
    nr_pods  1
    pod_cpus [0]=0000000f
    pod_node [0]=-1
    cpu_pod  [0]=0 [1]=0 [2]=0 [3]=0

  Worker Pools
  ============
  pool[00] ref= 1 nice=  0 idle/workers=  4/  4 cpu=  0
  pool[01] ref= 1 nice=-20 idle/workers=  2/  2 cpu=  0
  pool[02] ref= 1 nice=  0 idle/workers=  4/  4 cpu=  1
  pool[03] ref= 1 nice=-20 idle/workers=  2/  2 cpu=  1
  pool[04] ref= 1 nice=  0 idle/workers=  4/  4 cpu=  2
  pool[05] ref= 1 nice=-20 idle/workers=  2/  2 cpu=  2
  pool[06] ref= 1 nice=  0 idle/workers=  3/  3 cpu=  3
  pool[07] ref= 1 nice=-20 idle/workers=  2/  2 cpu=  3
  pool[08] ref=42 nice=  0 idle/workers=  6/  6 cpus=0000000f
  pool[09] ref=28 nice=  0 idle/workers=  3/  3 cpus=00000003
  pool[10] ref=28 nice=  0 idle/workers= 17/ 17 cpus=0000000c
  pool[11] ref= 1 nice=-20 idle/workers=  1/  1 cpus=0000000f
  pool[12] ref= 2 nice=-20 idle/workers=  1/  1 cpus=00000003
  pool[13] ref= 2 nice=-20 idle/workers=  1/  1 cpus=0000000c

  Workqueue CPU -> pool
  =====================
  [    workqueue \ CPU              0  1  2  3 dfl]
  events                   percpu   0  2  4  6
  events_highpri           percpu   1  3  5  7
  events_long              percpu   0  2  4  6
  events_unbound           unbound  9  9 10 10  8
  events_freezable         percpu   0  2  4  6
  events_power_efficient   percpu   0  2  4  6
  events_freezable_pwr_ef  percpu   0  2  4  6
  rcu_gp                   percpu   0  2  4  6
  rcu_par_gp               percpu   0  2  4  6
  slub_flushwq             percpu   0  2  4  6
  netns                    ordered  8  8  8  8  8
  ...

See the command's help message for more info.
Monitoring
==========

Use tools/workqueue/wq_monitor.py to monitor workqueue operations: ::

  $ tools/workqueue/wq_monitor.py events
                              total  infl  CPUtime  CPUhog  CMW/RPR  mayday rescued
  events                      18545     0      6.1       0        5       -       -
  events_highpri                  8     0      0.0       0        0       -       -
  events_long                     3     0      0.0       0        0       -       -
  events_unbound              38306     0      0.1       -        7       -       -
  events_freezable                0     0      0.0       0        0       -       -
  events_power_efficient      29598     0      0.2       0        0       -       -
  events_freezable_pwr_ef        10     0      0.0       0        0       -       -
  sock_diag_events                0     0      0.0       0        0       -       -

                              total  infl  CPUtime  CPUhog  CMW/RPR  mayday rescued
  events                      18548     0      6.1       0        5       -       -
  events_highpri                  8     0      0.0       0        0       -       -
  events_long                     3     0      0.0       0        0       -       -
  events_unbound              38322     0      0.1       -        7       -       -
  events_freezable                0     0      0.0       0        0       -       -
  events_power_efficient      29603     0      0.2       0        0       -       -
  events_freezable_pwr_ef        10     0      0.0       0        0       -       -
  sock_diag_events                0     0      0.0       0        0       -       -

  ...

See the command's help message for more info.


Debugging
=========

Because the work functions are executed by generic worker threads
there are a few tricks needed to shed some light on misbehaving
workqueue users.
]h"]h$]h&]uh1hhhhMhjhhhubh)}(h1Worker threads show up in the process list as: ::h]h.Worker threads show up in the process list as:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhhubjr)}(hX;root 5671 0.0 0.0 0 0 ? S 12:07 0:00 [kworker/0:1] root 5672 0.0 0.0 0 0 ? S 12:07 0:00 [kworker/1:2] root 5673 0.0 0.0 0 0 ? S 12:12 0:00 [kworker/0:0] root 5674 0.0 0.0 0 0 ? S 12:13 0:00 [kworker/1:0]h]hX;root 5671 0.0 0.0 0 0 ? S 12:07 0:00 [kworker/0:1] root 5672 0.0 0.0 0 0 ? S 12:07 0:00 [kworker/1:2] root 5673 0.0 0.0 0 0 ? S 12:12 0:00 [kworker/0:0] root 5674 0.0 0.0 0 0 ? S 12:13 0:00 [kworker/1:0]}hjsbah}(h]h ]h"]h$]h&]jjuh1jqhhhMhjhhhubh)}(h[If kworkers are going crazy (using too much cpu), there are two types of possible problems:h]h[If kworkers are going crazy (using too much cpu), there are two types of possible problems:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhhubjJ)}(hh1. Something being scheduled in rapid succession 2. A single work item that consumes lots of cpu cycles h]henumerated_list)}(hhh](j)}(h-Something being scheduled in rapid successionh]h)}(hjh]h-Something being scheduled in rapid succession}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(h4A single work item that consumes lots of cpu cycles h]h)}(h3A single work item that consumes lots of cpu cyclesh]h3A single work item that consumes lots of cpu cycles}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]enumtypearabicprefixhsuffix.uh1jhjubah}(h]h ]h"]h$]h&]uh1jIhhhMhjhhhubh)}(h.The first one can be tracked using tracing: ::h]h+The first one can be tracked using tracing:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhhubjr)}(h$ echo workqueue:workqueue_queue_work > /sys/kernel/tracing/set_event $ cat /sys/kernel/tracing/trace_pipe > out.txt (wait a few secs) ^Ch]h$ echo workqueue:workqueue_queue_work > /sys/kernel/tracing/set_event $ cat /sys/kernel/tracing/trace_pipe > out.txt (wait a few secs) ^C}hjsbah}(h]h ]h"]h$]h&]jjuh1jqhhhMhjhhhubh)}(hIf something is busy looping on work queueing, it would be dominating the output and the offender can be determined with the work item function.h]hIf something is busy looping on work queueing, it would be dominating the output and the offender can be determined with the work item function.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhhubh)}(hvFor the second type of problems it should be possible to just check the stack trace of the offending worker thread. ::h]hsFor the second type of problems it should be possible to just check the stack trace of the offending worker thread.}(hj$hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhhubjr)}(h'$ cat /proc/THE_OFFENDING_KWORKER/stackh]h'$ cat /proc/THE_OFFENDING_KWORKER/stack}hj2sbah}(h]h ]h"]h$]h&]jjuh1jqhhhMhjhhhubh)}(hHThe work item's function should be trivially visible in the stack trace.h]hJThe work item’s function should be trivially visible in the stack trace.}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhhubeh}(h] debuggingah ]h"] debuggingah$]h&]uh1hhhhhhhhMubh)}(hhh](h)}(hNon-reentrance Conditionsh]hNon-reentrance Conditions}(hjYhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjVhhhhhMubh)}(hzWorkqueue guarantees that a work item cannot be re-entrant if the following conditions hold after a work item gets queued:h]hzWorkqueue guarantees that a work item cannot be re-entrant if the following conditions hold after a work item gets queued:}(hjghhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjVhhubjJ)}(h1. The work function hasn't been changed. 2. No one queues the work item to another workqueue. 3. The work item hasn't been reinitiated. 
Kernel Inline Documentations Reference
======================================

struct workqueue_attrs
----------------------

A struct for workqueue attributes.

**Definition**::

  struct workqueue_attrs {
      int nice;
      cpumask_var_t cpumask;
      cpumask_var_t __pod_cpumask;
      bool affn_strict;
      enum wq_affn_scope affn_scope;
      bool ordered;
  };

**Members**

``nice``
  nice level

``cpumask``
  allowed CPUs

  Work items in this workqueue are affine to these CPUs and not
  allowed to execute on other CPUs.  A pool serving a workqueue must
  have the same ``cpumask``.

``__pod_cpumask``
  internal attribute used to create per-pod pools

  Internal use only.

  Per-pod unbound worker pools are used to improve locality.  Always a
  subset of ->cpumask.  A workqueue can be associated with multiple
  worker pools with disjoint ``__pod_cpumask``'s.  Whether the
  enforcement of a pool's ``__pod_cpumask`` is strict depends on
  ``affn_strict``.

``affn_strict``
  affinity scope is strict

  If clear, workqueue will make a best-effort attempt at starting the
  worker inside ``__pod_cpumask`` but the scheduler is free to migrate
  it outside.

  If set, workers are only allowed to run inside ``__pod_cpumask``.

``affn_scope``
  unbound CPU affinity scope

  CPU pods are used to improve execution locality of unbound work
  items.  There are multiple pod types, one for each wq_affn_scope,
  and every CPU in the system belongs to one pod in every pod type.
  CPUs that belong to the same pod share the worker pool.  For
  example, selecting ``WQ_AFFN_NUMA`` makes the workqueue use a
  separate worker pool for each NUMA node.

``ordered``
  work items must be executed one by one in queueing order

**Description**

This can be used to change attributes of an unbound workqueue.
For example, selecting ``WQ_AFFN_NUMA`` makes the workqueue use a separate worker pool for each NUMA node. h](j^)}(h``affn_scope``h]j)}(hjh]h affn_scope}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhKhjubjw)}(hhh](h)}(hunbound CPU affinity scopeh]hunbound CPU affinity scope}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhKhjubh)}(hXeCPU pods are used to improve execution locality of unbound work items. There are multiple pod types, one for each wq_affn_scope, and every CPU in the system belongs to one pod in every pod type. CPUs that belong to the same pod share the worker pool. For example, selecting ``WQ_AFFN_NUMA`` makes the workqueue use a separate worker pool for each NUMA node.h](hXCPU pods are used to improve execution locality of unbound work items. There are multiple pod types, one for each wq_affn_scope, and every CPU in the system belongs to one pod in every pod type. CPUs that belong to the same pod share the worker pool. For example, selecting }(hjhhhNhNubj)}(h``WQ_AFFN_NUMA``h]h WQ_AFFN_NUMA}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhC makes the workqueue use a separate worker pool for each NUMA node.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhKhjubeh}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhKhjubjX)}(hD``ordered`` work items must be executed one by one in queueing orderh](j^)}(h ``ordered``h]j)}(hjh]hordered}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhKhjubjw)}(hhh]h)}(h8work items must be executed one by one in queueing orderh]h8work items must be executed one by one in queueing order}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhKhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhKhjubeh}(h]h ]h"]h$]h&]uh1jRhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubh)}(h**Description**h]j)}(hj0h]h Description}(hj2hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj.ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhKhjhhubh)}(h>This can be used to change attributes of an unbound workqueue.h]h>This can be used to change attributes of an unbound workqueue.}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhKhjhhubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jwork_pending (C macro)c.work_pendinghNtauh1jhjhhhNhNubj)}(hhh](j)}(h work_pendingh]j )}(h work_pendingh]jI)}(h work_pendingh]jO)}(hjhh]h work_pending}(hjrhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjnubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjjhhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMbubah}(h]h ]h"]h$]h&]jjjluh1jjmjnhjfhhhjhMbubah}(h]jaah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjhMbhjchhubj{)}(hhh]h}(h]h ]h"]h$]h&]uh1jzhjchhhjhMbubeh}(h]h ](jmacroeh"]h$]h&]jjjjjjjjjuh1jhhhjhNhNubh)}(h``work_pending (work)``h]j)}(hjh]hwork_pending (work)}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMdhjhhubjJ)}(h2Find out whether a work item is currently pending h]h)}(h1Find out whether a work item is currently 
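A minimal sketch of adjusting these attributes on an unbound workqueue follows. It assumes the caller is built-in code that can reach ``alloc_workqueue_attrs()``, ``apply_workqueue_attrs()`` and ``free_workqueue_attrs()`` (these helpers are not exported to modules in every kernel version), and the function name is hypothetical::

  #include <linux/cpumask.h>
  #include <linux/errno.h>
  #include <linux/workqueue.h>

  /* Hypothetical: re-tune an already created WQ_UNBOUND workqueue. */
  static int tune_unbound_wq(struct workqueue_struct *wq)
  {
          struct workqueue_attrs *attrs;
          int ret;

          attrs = alloc_workqueue_attrs();
          if (!attrs)
                  return -ENOMEM;

          attrs->nice = -5;                         /* higher worker priority */
          cpumask_copy(attrs->cpumask, cpu_possible_mask);
          attrs->affn_scope = WQ_AFFN_NUMA;         /* one pool per NUMA node */

          ret = apply_workqueue_attrs(wq, attrs);   /* unbound workqueues only */
          free_workqueue_attrs(attrs);
          return ret;
  }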
pendingh]h1Find out whether a work item is currently pending}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhM_hjubah}(h]h ]h"]h$]h&]uh1jIhjhM_hjhhubj)}(h4**Parameters** ``work`` The work item in questionh](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMchjubjS)}(hhh]jX)}(h"``work`` The work item in questionh](j^)}(h``work``h]j)}(hjh]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMehjubjw)}(hhh]h)}(hThe work item in questionh]hThe work item in question}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhM`hjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhj hMehjubah}(h]h ]h"]h$]h&]uh1jRhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jdelayed_work_pending (C macro)c.delayed_work_pendinghNtauh1jhjhhhNhNubj)}(hhh](j)}(hdelayed_work_pendingh]j )}(hdelayed_work_pendingh]jI)}(hdelayed_work_pendingh]jO)}(hjLh]hdelayed_work_pending}(hjVhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjRubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjNhhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMjubah}(h]h ]h"]h$]h&]jjjluh1jjmjnhjJhhhjihMjubah}(h]jEah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjihMjhjGhhubj{)}(hhh]h}(h]h ]h"]h$]h&]uh1jzhjGhhhjihMjubeh}(h]h ](jmacroeh"]h$]h&]jjjjjjjjjuh1jhhhjhNhNubh)}(h``delayed_work_pending (w)``h]j)}(hjh]hdelayed_work_pending (w)}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMlhjhhubjJ)}(h limits the number of in-flight work items for each CPU. e.g. }(hjhhhNhNubj)}(h**max_active**h]h max_active}(hj5hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhW of 1 indicates that each CPU can be executing at most one work item for the workqueue.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhjubh)}(hFor unbound workqueues, **max_active** limits the number of in-flight work items for the whole system. e.g. **max_active** of 16 indicates that that there can be at most 16 work items executing for the workqueue in the whole system.h](hFor unbound workqueues, }(hjNhhhNhNubj)}(h**max_active**h]h max_active}(hjVhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjNubhF limits the number of in-flight work items for the whole system. e.g. 
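The pending tests are plain bit tests and only provide a snapshot of the state. A minimal sketch, using a hypothetical driver structure, might look like this::

  #include <linux/workqueue.h>

  struct my_dev {                         /* hypothetical driver state */
          struct work_struct reset_work;
          struct delayed_work poll_dwork;
  };

  static void my_dev_reset(struct my_dev *dev)
  {
          /* Snapshot only: the pending bit may change right afterwards. */
          if (work_pending(&dev->reset_work))
                  return;

          if (delayed_work_pending(&dev->poll_dwork))
                  cancel_delayed_work_sync(&dev->poll_dwork);

          schedule_work(&dev->reset_work);
  }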
}(hjNhhhNhNubj)}(h**max_active**h]h max_active}(hjhhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjNubhn of 16 indicates that that there can be at most 16 work items executing for the workqueue in the whole system.}(hjNhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhjubh)}(hAs sharing the same active counter for an unbound workqueue across multiple NUMA nodes can be expensive, **max_active** is distributed to each NUMA node according to the proportion of the number of online CPUs and enforced independently.h](hiAs sharing the same active counter for an unbound workqueue across multiple NUMA nodes can be expensive, }(hjhhhNhNubj)}(h**max_active**h]h max_active}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhv is distributed to each NUMA node according to the proportion of the number of online CPUs and enforced independently.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhjubh)}(hXDepending on online CPU distribution, a node may end up with per-node max_active which is significantly lower than **max_active**, which can lead to deadlocks if the per-node concurrency limit is lower than the maximum number of interdependent work items for the workqueue.h](hsDepending on online CPU distribution, a node may end up with per-node max_active which is significantly lower than }(hjhhhNhNubj)}(h**max_active**h]h max_active}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh, which can lead to deadlocks if the per-node concurrency limit is lower than the maximum number of interdependent work items for the workqueue.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhjubh)}(hX0To guarantee forward progress regardless of online CPU distribution, the concurrency limit on every node is guaranteed to be equal to or greater than min_active which is set to min(**max_active**, ``WQ_DFL_MIN_ACTIVE``). This means that the sum of per-node max_active's may be larger than **max_active**.h](hTo guarantee forward progress regardless of online CPU distribution, the concurrency limit on every node is guaranteed to be equal to or greater than min_active which is set to min(}(hjhhhNhNubj)}(h**max_active**h]h max_active}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh, }(hjhhhNhNubj)}(h``WQ_DFL_MIN_ACTIVE``h]hWQ_DFL_MIN_ACTIVE}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhI). 
This means that the sum of per-node max_active’s may be larger than }(hjhhhNhNubj)}(h**max_active**h]h max_active}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhjubh)}(haFor detailed information on ``WQ_*`` flags, please refer to Documentation/core-api/workqueue.rst.h](hFor detailed information on }(hjhhhNhNubj)}(h``WQ_*``h]hWQ_*}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh= flags, please refer to Documentation/core-api/workqueue.rst.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhjubh)}(h **Return**h]j)}(hj+h]hReturn}(hj-hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj)ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhjubh)}(hCPointer to the allocated workqueue on success, ``NULL`` on failure.h](h/Pointer to the allocated workqueue on success, }(hjAhhhNhNubj)}(h``NULL``h]hNULL}(hjIhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjAubh on failure.}(hjAhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j(alloc_workqueue_lockdep_map (C function)c.alloc_workqueue_lockdep_maphNtauh1jhjhhhNhNubj)}(hhh](j)}(hstruct workqueue_struct * alloc_workqueue_lockdep_map (const char *fmt, unsigned int flags, int max_active, struct lockdep_map *lockdep_map, ...)h]j )}(hstruct workqueue_struct *alloc_workqueue_lockdep_map(const char *fmt, unsigned int flags, int max_active, struct lockdep_map *lockdep_map, ...)h](j&)}(hj)h]hstruct}(hjhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hj~hhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj~hhhjhMubh)}(hhh]jO)}(hworkqueue_structh]hworkqueue_struct}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjmodnameN classnameNjojr)}ju]jx)}jkalloc_workqueue_lockdep_mapsbc.alloc_workqueue_lockdep_mapasbuh1hhj~hhhjhMubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj~hhhjhMubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj~hhhjhMubjI)}(halloc_workqueue_lockdep_maph]jO)}(hjh]halloc_workqueue_lockdep_map}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhj~hhhjhMubj)}(h[(const char *fmt, unsigned int flags, int max_active, struct lockdep_map *lockdep_map, ...)h](j)}(hconst char *fmth](j&)}(hjh]hconst}(hjhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjubj8)}(h h]h }(hj hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubj)}(hcharh]hchar}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj8)}(h h]h }(hj%hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubj)}(hjah]h*}(hj3hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubjO)}(hfmth]hfmt}(hj@hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hunsigned int flagsh](j)}(hunsignedh]hunsigned}(hjYhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjUubj8)}(h h]h }(hjghhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjUubj)}(hinth]hint}(hjuhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjUubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjUubjO)}(hflagsh]hflags}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjUubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hint max_activeh](j)}(hinth]hint}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubjO)}(h max_activeh]h max_active}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct 
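For illustration, a minimal sketch of creating an unbound workqueue with a system-wide concurrency limit of 16, using hypothetical names::

  #include <linux/errno.h>
  #include <linux/workqueue.h>

  static struct workqueue_struct *crypt_wq;

  static int crypt_init(void)
  {
          /*
           * At most 16 work items in flight system-wide, subject to the
           * per-node min_active guarantee described above.
           */
          crypt_wq = alloc_workqueue("crypt_wq", WQ_UNBOUND, 16);
          if (!crypt_wq)
                  return -ENOMEM;
          return 0;
  }

  static void crypt_exit(void)
  {
          destroy_workqueue(crypt_wq);
  }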
lockdep_map *lockdep_maph](j&)}(hj)h]hstruct}(hjhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubh)}(hhh]jO)}(h lockdep_maph]h lockdep_map}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjmodnameN classnameNjojr)}ju]jc.alloc_workqueue_lockdep_mapasbuh1hhjubj8)}(h h]h }(hj hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubj)}(hjah]h*}(hj) hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubjO)}(h lockdep_maph]h lockdep_map}(hj6 hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(h...h]j)}(hjh]h...}(hjO hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjK ubah}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1jhj~hhhjhMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjzhhhjhMubah}(h]juah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjhMhjwhhubj{)}(hhh]h)}(h2allocate a workqueue with user-defined lockdep_maph]h2allocate a workqueue with user-defined lockdep_map}(hjx hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhju hhubah}(h]h ]h"]h$]h&]uh1jzhjwhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjj jj jjjuh1jhhhjhNhNubj)}(hX&**Parameters** ``const char *fmt`` printf format for the name of the workqueue ``unsigned int flags`` WQ_* flags ``int max_active`` max in-flight work items, 0 for default ``struct lockdep_map *lockdep_map`` user-defined lockdep_map ``...`` args for **fmt** **Description** Same as alloc_workqueue but with the a user-define lockdep_map. Useful for workqueues created with the same purpose and to avoid leaking a lockdep_map on each workqueue creation. **Return** Pointer to the allocated workqueue on success, ``NULL`` on failure.h](h)}(h**Parameters**h]j)}(hj h]h Parameters}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj ubjS)}(hhh](jX)}(h@``const char *fmt`` printf format for the name of the workqueue h](j^)}(h``const char *fmt``h]j)}(hj h]hconst char *fmt}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj ubjw)}(hhh]h)}(h+printf format for the name of the workqueueh]h+printf format for the name of the workqueue}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj hMhj ubah}(h]h ]h"]h$]h&]uh1jvhj ubeh}(h]h ]h"]h$]h&]uh1jWhj hMhj ubjX)}(h"``unsigned int flags`` WQ_* flags h](j^)}(h``unsigned int flags``h]j)}(hj h]hunsigned int flags}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj ubjw)}(hhh]h)}(h WQ_* flagsh]h WQ_* flags}(hj !hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj!hMhj!ubah}(h]h ]h"]h$]h&]uh1jvhj ubeh}(h]h ]h"]h$]h&]uh1jWhj!hMhj ubjX)}(h;``int max_active`` max in-flight work items, 0 for default h](j^)}(h``int max_active``h]j)}(hj+!h]hint max_active}(hj-!hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj)!ubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj%!ubjw)}(hhh]h)}(h'max in-flight work items, 0 for defaulth]h'max in-flight work items, 0 for default}(hjD!hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj@!hMhjA!ubah}(h]h ]h"]h$]h&]uh1jvhj%!ubeh}(h]h ]h"]h$]h&]uh1jWhj@!hMhj ubjX)}(h=``struct lockdep_map *lockdep_map`` user-defined lockdep_map h](j^)}(h#``struct lockdep_map *lockdep_map``h]j)}(hjd!h]hstruct lockdep_map *lockdep_map}(hjf!hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjb!ubah}(h]h 
]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj^!ubjw)}(hhh]h)}(huser-defined lockdep_maph]huser-defined lockdep_map}(hj}!hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjy!hMhjz!ubah}(h]h ]h"]h$]h&]uh1jvhj^!ubeh}(h]h ]h"]h$]h&]uh1jWhjy!hMhj ubjX)}(h``...`` args for **fmt** h](j^)}(h``...``h]j)}(hj!h]h...}(hj!hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj!ubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj!ubjw)}(hhh]h)}(hargs for **fmt**h](h args for }(hj!hhhNhNubj)}(h**fmt**h]hfmt}(hj!hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj!ubeh}(h]h ]h"]h$]h&]uh1hhj!hMhj!ubah}(h]h ]h"]h$]h&]uh1jvhj!ubeh}(h]h ]h"]h$]h&]uh1jWhj!hMhj ubeh}(h]h ]h"]h$]h&]uh1jRhj ubh)}(h**Description**h]j)}(hj!h]h Description}(hj!hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj!ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj ubh)}(hSame as alloc_workqueue but with the a user-define lockdep_map. Useful for workqueues created with the same purpose and to avoid leaking a lockdep_map on each workqueue creation.h]hSame as alloc_workqueue but with the a user-define lockdep_map. Useful for workqueues created with the same purpose and to avoid leaking a lockdep_map on each workqueue creation.}(hj!hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj ubh)}(h **Return**h]j)}(hj "h]hReturn}(hj"hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj "ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhM hj ubh)}(hCPointer to the allocated workqueue on success, ``NULL`` on failure.h](h/Pointer to the allocated workqueue on success, }(hj#"hhhNhNubj)}(h``NULL``h]hNULL}(hj+"hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj#"ubh on failure.}(hj#"hhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhM hj ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j-alloc_ordered_workqueue_lockdep_map (C macro)%c.alloc_ordered_workqueue_lockdep_maphNtauh1jhjhhhNhNubj)}(hhh](j)}(h#alloc_ordered_workqueue_lockdep_maph]j )}(h#alloc_ordered_workqueue_lockdep_maph]jI)}(h#alloc_ordered_workqueue_lockdep_maph]jO)}(hj^"h]h#alloc_ordered_workqueue_lockdep_map}(hjh"hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjd"ubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhj`"hhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhM"ubah}(h]h ]h"]h$]h&]jjjluh1jjmjnhj\"hhhj{"hM"ubah}(h]jW"ah ](jrjseh"]h$]h&]jwjx)jyhuh1jhj{"hM"hjY"hhubj{)}(hhh]h}(h]h ]h"]h$]h&]uh1jzhjY"hhhj{"hM"ubeh}(h]h ](jmacroeh"]h$]h&]jjjj"jj"jjjuh1jhhhjhNhNubh)}(hJ``alloc_ordered_workqueue_lockdep_map (fmt, flags, lockdep_map, args...)``h]j)}(hj"h]hFalloc_ordered_workqueue_lockdep_map (fmt, flags, lockdep_map, args...)}(hj"hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj"ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhM$hjhhubjJ)}(h+hMhj?+ubah}(h]h ]h"]h$]h&]uh1jvhj#+ubeh}(h]h ]h"]h$]h&]uh1jWhj>+hMhj*ubjX)}(hB``unsigned long delay`` number of jiffies to wait before queueing h](j^)}(h``unsigned long delay``h]j)}(hjb+h]hunsigned long delay}(hjd+hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj`+ubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj\+ubjw)}(hhh]h)}(h)number of jiffies to wait before queueingh]h)number of 
jiffies to wait before queueing}(hj{+hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjw+hMhjx+ubah}(h]h ]h"]h$]h&]uh1jvhj\+ubeh}(h]h ]h"]h$]h&]uh1jWhjw+hMhj*ubeh}(h]h ]h"]h$]h&]uh1jRhj*ubh)}(h**Description**h]j)}(hj+h]h Description}(hj+hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj+ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj*ubh)}(hEEquivalent to queue_delayed_work_on() but tries to use the local CPU.h]hEEquivalent to queue_delayed_work_on() but tries to use the local CPU.}(hj+hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj*ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jmod_delayed_work (C function)c.mod_delayed_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(hdbool mod_delayed_work (struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay)h]j )}(hcbool mod_delayed_work(struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay)h](j)}(hj7&h]hbool}(hj+hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj+hhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMubj8)}(h h]h }(hj+hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj+hhhj+hMubjI)}(hmod_delayed_workh]jO)}(hmod_delayed_workh]hmod_delayed_work}(hj,hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj+ubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhj+hhhj+hMubj)}(hN(struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay)h](j)}(hstruct workqueue_struct *wqh](j&)}(hj)h]hstruct}(hj,hhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hj,ubj8)}(h h]h }(hj+,hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj,ubh)}(hhh]jO)}(hworkqueue_structh]hworkqueue_struct}(hj<,hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj9,ubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetj>,modnameN classnameNjojr)}ju]jx)}jkj,sbc.mod_delayed_workasbuh1hhj,ubj8)}(h h]h }(hj\,hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj,ubj)}(hjah]h*}(hjj,hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj,ubjO)}(hwqh]hwq}(hjw,hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj,ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj,ubj)}(hstruct delayed_work *dworkh](j&)}(hj)h]hstruct}(hj,hhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hj,ubj8)}(h h]h }(hj,hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj,ubh)}(hhh]jO)}(h delayed_workh]h delayed_work}(hj,hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj,ubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetj,modnameN classnameNjojr)}ju]jX,c.mod_delayed_workasbuh1hhj,ubj8)}(h h]h }(hj,hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj,ubj)}(hjah]h*}(hj,hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj,ubjO)}(hdworkh]hdwork}(hj,hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj,ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj,ubj)}(hunsigned long delayh](j)}(hunsignedh]hunsigned}(hj-hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj,ubj8)}(h h]h }(hj-hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj,ubj)}(hlongh]hlong}(hj-hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj,ubj8)}(h h]h }(hj*-hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj,ubjO)}(hdelayh]hdelay}(hj8-hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj,ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj,ubeh}(h]h ]h"]h$]h&]jjuh1jhj+hhhj+hMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhj+hhhj+hMubah}(h]j+ah ](jrjseh"]h$]h&]jwjx)jyhuh1jhj+hMhj+hhubj{)}(hhh]h)}(h'modify delay of or queue a delayed workh]h'modify delay of or queue a delayed work}(hjb-hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj_-hhubah}(h]h ]h"]h$]h&]uh1jzhj+hhhj+hMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjz-jjz-jjjuh1jhhhjhNhNubj)}(h**Parameters** ``struct workqueue_struct *wq`` workqueue to use ``struct delayed_work *dwork`` work to queue 
``unsigned long delay`` number of jiffies to wait before queueing **Description** mod_delayed_work_on() on local CPU.h](h)}(h**Parameters**h]j)}(hj-h]h Parameters}(hj-hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj-ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj~-ubjS)}(hhh](jX)}(h1``struct workqueue_struct *wq`` workqueue to use h](j^)}(h``struct workqueue_struct *wq``h]j)}(hj-h]hstruct workqueue_struct *wq}(hj-hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj-ubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj-ubjw)}(hhh]h)}(hworkqueue to useh]hworkqueue to use}(hj-hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj-hMhj-ubah}(h]h ]h"]h$]h&]uh1jvhj-ubeh}(h]h ]h"]h$]h&]uh1jWhj-hMhj-ubjX)}(h-``struct delayed_work *dwork`` work to queue h](j^)}(h``struct delayed_work *dwork``h]j)}(hj-h]hstruct delayed_work *dwork}(hj-hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj-ubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj-ubjw)}(hhh]h)}(h work to queueh]h work to queue}(hj-hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj-hMhj-ubah}(h]h ]h"]h$]h&]uh1jvhj-ubeh}(h]h ]h"]h$]h&]uh1jWhj-hMhj-ubjX)}(hB``unsigned long delay`` number of jiffies to wait before queueing h](j^)}(h``unsigned long delay``h]j)}(hj.h]hunsigned long delay}(hj.hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj.ubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj.ubjw)}(hhh]h)}(h)number of jiffies to wait before queueingh]h)number of jiffies to wait before queueing}(hj..hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj*.hMhj+.ubah}(h]h ]h"]h$]h&]uh1jvhj.ubeh}(h]h ]h"]h$]h&]uh1jWhj*.hMhj-ubeh}(h]h ]h"]h$]h&]uh1jRhj~-ubh)}(h**Description**h]j)}(hjP.h]h Description}(hjR.hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjN.ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj~-ubh)}(h#mod_delayed_work_on() on local CPU.h]h#mod_delayed_work_on() on local CPU.}(hjf.hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj~-ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jschedule_work_on (C function)c.schedule_work_onhNtauh1jhjhhhNhNubj)}(hhh](j)}(h9bool schedule_work_on (int cpu, struct work_struct *work)h]j )}(h8bool schedule_work_on(int cpu, struct work_struct *work)h](j)}(hj7&h]hbool}(hj.hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj.hhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMubj8)}(h h]h }(hj.hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj.hhhj.hMubjI)}(hschedule_work_onh]jO)}(hschedule_work_onh]hschedule_work_on}(hj.hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj.ubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhj.hhhj.hMubj)}(h#(int cpu, struct work_struct *work)h](j)}(hint cpuh](j)}(hinth]hint}(hj.hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj.ubj8)}(h h]h }(hj.hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj.ubjO)}(hcpuh]hcpu}(hj.hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj.ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj.ubj)}(hstruct work_struct *workh](j&)}(hj)h]hstruct}(hj/hhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hj/ubj8)}(h h]h }(hj/hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj/ubh)}(hhh]jO)}(h work_structh]h work_struct}(hj$/hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj!/ubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetj&/modnameN classnameNjojr)}ju]jx)}jkj.sbc.schedule_work_onasbuh1hhj/ubj8)}(h h]h }(hjD/hhhNhNubah}(h]h 
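A minimal sketch contrasting the two helpers, with hypothetical names: queue_delayed_work() arms the work only if it is not already pending, while mod_delayed_work() re-arms the timer whether or not it is pending::

  #include <linux/jiffies.h>
  #include <linux/workqueue.h>

  static void timeout_fn(struct work_struct *work);
  static DECLARE_DELAYED_WORK(timeout_dwork, timeout_fn);

  static void timeout_fn(struct work_struct *work)
  {
          /* ... handle the expired timeout ... */
  }

  static void arm_timeout(struct workqueue_struct *wq)
  {
          /* No-op if the work is already pending. */
          queue_delayed_work(wq, &timeout_dwork, msecs_to_jiffies(100));
  }

  static void push_timeout_back(struct workqueue_struct *wq)
  {
          /* Modifies the timer if pending, queues the work otherwise. */
          mod_delayed_work(wq, &timeout_dwork, msecs_to_jiffies(100));
  }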
]jDah"]h$]h&]uh1j7hj/ubj)}(hjah]h*}(hjR/hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj/ubjO)}(hworkh]hwork}(hj_/hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj/ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj.ubeh}(h]h ]h"]h$]h&]jjuh1jhj.hhhj.hMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhj.hhhj.hMubah}(h]j.ah ](jrjseh"]h$]h&]jwjx)jyhuh1jhj.hMhj.hhubj{)}(hhh]h)}(hput work task on a specific cpuh]hput work task on a specific cpu}(hj/hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj/hhubah}(h]h ]h"]h$]h&]uh1jzhj.hhhj.hMubeh}(h]h ](jfunctioneh"]h$]h&]jjjj/jj/jjjuh1jhhhjhNhNubj)}(h**Parameters** ``int cpu`` cpu to put the work task on ``struct work_struct *work`` job to be done **Description** This puts a job on a specific cpuh](h)}(h**Parameters**h]j)}(hj/h]h Parameters}(hj/hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj/ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj/ubjS)}(hhh](jX)}(h(``int cpu`` cpu to put the work task on h](j^)}(h ``int cpu``h]j)}(hj/h]hint cpu}(hj/hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj/ubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj/ubjw)}(hhh]h)}(hcpu to put the work task onh]hcpu to put the work task on}(hj/hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj/hMhj/ubah}(h]h ]h"]h$]h&]uh1jvhj/ubeh}(h]h ]h"]h$]h&]uh1jWhj/hMhj/ubjX)}(h,``struct work_struct *work`` job to be done h](j^)}(h``struct work_struct *work``h]j)}(hj0h]hstruct work_struct *work}(hj0hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj0ubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj/ubjw)}(hhh]h)}(hjob to be doneh]hjob to be done}(hj0hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj0hMhj0ubah}(h]h ]h"]h$]h&]uh1jvhj/ubeh}(h]h ]h"]h$]h&]uh1jWhj0hMhj/ubeh}(h]h ]h"]h$]h&]uh1jRhj/ubh)}(h**Description**h]j)}(hj>0h]h Description}(hj@0hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj<0ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj/ubh)}(h!This puts a job on a specific cpuh]h!This puts a job on a specific cpu}(hjT0hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj/ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jschedule_work (C function)c.schedule_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(h-bool schedule_work (struct work_struct *work)h]j )}(h,bool schedule_work(struct work_struct *work)h](j)}(hj7&h]hbool}(hj0hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj0hhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMubj8)}(h h]h }(hj0hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj0hhhj0hMubjI)}(h schedule_workh]jO)}(h schedule_workh]h schedule_work}(hj0hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj0ubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhj0hhhj0hMubj)}(h(struct work_struct *work)h]j)}(hstruct work_struct *workh](j&)}(hj)h]hstruct}(hj0hhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hj0ubj8)}(h h]h }(hj0hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj0ubh)}(hhh]jO)}(h work_structh]h work_struct}(hj0hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj0ubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetj0modnameN classnameNjojr)}ju]jx)}jkj0sbc.schedule_workasbuh1hhj0ubj8)}(h h]h }(hj0hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj0ubj)}(hjah]h*}(hj 1hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj0ubjO)}(hworkh]hwork}(hj1hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj0ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj0ubah}(h]h ]h"]h$]h&]jjuh1jhj0hhhj0hMubeh}(h]h 
]h"]h$]h&]jjjluh1jjmjnhj{0hhhj0hMubah}(h]jv0ah ](jrjseh"]h$]h&]jwjx)jyhuh1jhj0hMhjx0hhubj{)}(hhh]h)}(h!put work task in global workqueueh]h!put work task in global workqueue}(hjB1hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj?1hhubah}(h]h ]h"]h$]h&]uh1jzhjx0hhhj0hMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjZ1jjZ1jjjuh1jhhhjhNhNubj)}(hX**Parameters** ``struct work_struct *work`` job to be done **Description** Returns ``false`` if **work** was already on the kernel-global workqueue and ``true`` otherwise. This puts a job in the kernel-global workqueue if it was not already queued and leaves it in the same position on the kernel-global workqueue otherwise. Shares the same memory-ordering properties of queue_work(), cf. the DocBook header of queue_work().h](h)}(h**Parameters**h]j)}(hjd1h]h Parameters}(hjf1hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjb1ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj^1ubjS)}(hhh]jX)}(h,``struct work_struct *work`` job to be done h](j^)}(h``struct work_struct *work``h]j)}(hj1h]hstruct work_struct *work}(hj1hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj1ubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj}1ubjw)}(hhh]h)}(hjob to be doneh]hjob to be done}(hj1hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj1hMhj1ubah}(h]h ]h"]h$]h&]uh1jvhj}1ubeh}(h]h ]h"]h$]h&]uh1jWhj1hMhjz1ubah}(h]h ]h"]h$]h&]uh1jRhj^1ubh)}(h**Description**h]j)}(hj1h]h Description}(hj1hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj1ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj^1ubh)}(h`Returns ``false`` if **work** was already on the kernel-global workqueue and ``true`` otherwise.h](hReturns }(hj1hhhNhNubj)}(h ``false``h]hfalse}(hj1hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj1ubh if }(hj1hhhNhNubj)}(h**work**h]hwork}(hj1hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj1ubh0 was already on the kernel-global workqueue and }(hj1hhhNhNubj)}(h``true``h]htrue}(hj2hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj1ubh otherwise.}(hj1hhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj^1ubh)}(hThis puts a job in the kernel-global workqueue if it was not already queued and leaves it in the same position on the kernel-global workqueue otherwise.h]hThis puts a job in the kernel-global workqueue if it was not already queued and leaves it in the same position on the kernel-global workqueue otherwise.}(hj2hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj^1ubh)}(hcShares the same memory-ordering properties of queue_work(), cf. the DocBook header of queue_work().h]hcShares the same memory-ordering properties of queue_work(), cf. 
the DocBook header of queue_work().}(hj(2hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj^1ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j"enable_and_queue_work (C function)c.enable_and_queue_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(hRbool enable_and_queue_work (struct workqueue_struct *wq, struct work_struct *work)h]j )}(hQbool enable_and_queue_work(struct workqueue_struct *wq, struct work_struct *work)h](j)}(hj7&h]hbool}(hjW2hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjS2hhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMubj8)}(h h]h }(hje2hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjS2hhhjd2hMubjI)}(henable_and_queue_workh]jO)}(henable_and_queue_workh]henable_and_queue_work}(hjw2hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjs2ubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjS2hhhjd2hMubj)}(h7(struct workqueue_struct *wq, struct work_struct *work)h](j)}(hstruct workqueue_struct *wqh](j&)}(hj)h]hstruct}(hj2hhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hj2ubj8)}(h h]h }(hj2hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj2ubh)}(hhh]jO)}(hworkqueue_structh]hworkqueue_struct}(hj2hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj2ubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetj2modnameN classnameNjojr)}ju]jx)}jkjy2sbc.enable_and_queue_workasbuh1hhj2ubj8)}(h h]h }(hj2hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj2ubj)}(hjah]h*}(hj2hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj2ubjO)}(hwqh]hwq}(hj2hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj2ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj2ubj)}(hstruct work_struct *workh](j&)}(hj)h]hstruct}(hj3hhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hj3ubj8)}(h h]h }(hj3hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj3ubh)}(hhh]jO)}(h work_structh]h work_struct}(hj#3hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj 3ubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetj%3modnameN classnameNjojr)}ju]j2c.enable_and_queue_workasbuh1hhj3ubj8)}(h h]h }(hjA3hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj3ubj)}(hjah]h*}(hjO3hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj3ubjO)}(hworkh]hwork}(hj\3hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj3ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj2ubeh}(h]h ]h"]h$]h&]jjuh1jhjS2hhhjd2hMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjO2hhhjd2hMubah}(h]jJ2ah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjd2hMhjL2hhubj{)}(hhh]h)}(h4Enable and queue a work item on a specific workqueueh]h4Enable and queue a work item on a specific workqueue}(hj3hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj3hhubah}(h]h ]h"]h$]h&]uh1jzhjL2hhhjd2hMubeh}(h]h ](jfunctioneh"]h$]h&]jjjj3jj3jjjuh1jhhhjhNhNubj)}(hX **Parameters** ``struct workqueue_struct *wq`` The target workqueue ``struct work_struct *work`` The work item to be enabled and queued **Description** This function combines the operations of enable_work() and queue_work(), providing a convenient way to enable and queue a work item in a single call. It invokes enable_work() on **work** and then queues it if the disable depth reached 0. Returns ``true`` if the disable depth reached 0 and **work** is queued, and ``false`` otherwise. Note that **work** is always queued when disable depth reaches zero. 
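As a minimal sketch of the kernel-global helpers, using hypothetical names::

  #include <linux/workqueue.h>

  static void stats_fn(struct work_struct *work);
  static DECLARE_WORK(stats_work, stats_fn);

  static void stats_fn(struct work_struct *work)
  {
          /* ... fold per-CPU statistics ... */
  }

  static void kick_stats(int cpu)
  {
          if (cpu >= 0)
                  schedule_work_on(cpu, &stats_work);   /* run on @cpu */
          else
                  schedule_work(&stats_work);           /* any CPU */
  }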
If the desired behavior is queueing only if certain events took place while **work** is disabled, the user should implement the necessary state tracking and perform explicit conditional queueing after enable_work().h](h)}(h**Parameters**h]j)}(hj3h]h Parameters}(hj3hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj3ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj3ubjS)}(hhh](jX)}(h5``struct workqueue_struct *wq`` The target workqueue h](j^)}(h``struct workqueue_struct *wq``h]j)}(hj3h]hstruct workqueue_struct *wq}(hj3hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj3ubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj3ubjw)}(hhh]h)}(hThe target workqueueh]hThe target workqueue}(hj3hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj3hMhj3ubah}(h]h ]h"]h$]h&]uh1jvhj3ubeh}(h]h ]h"]h$]h&]uh1jWhj3hMhj3ubjX)}(hD``struct work_struct *work`` The work item to be enabled and queued h](j^)}(h``struct work_struct *work``h]j)}(hj4h]hstruct work_struct *work}(hj4hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj3ubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj3ubjw)}(hhh]h)}(h&The work item to be enabled and queuedh]h&The work item to be enabled and queued}(hj4hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj4hMhj4ubah}(h]h ]h"]h$]h&]uh1jvhj3ubeh}(h]h ]h"]h$]h&]uh1jWhj4hMhj3ubeh}(h]h ]h"]h$]h&]uh1jRhj3ubh)}(h**Description**h]j)}(hj;4h]h Description}(hj=4hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj94ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj3ubh)}(hXNThis function combines the operations of enable_work() and queue_work(), providing a convenient way to enable and queue a work item in a single call. It invokes enable_work() on **work** and then queues it if the disable depth reached 0. Returns ``true`` if the disable depth reached 0 and **work** is queued, and ``false`` otherwise.h](hThis function combines the operations of enable_work() and queue_work(), providing a convenient way to enable and queue a work item in a single call. It invokes enable_work() on }(hjQ4hhhNhNubj)}(h**work**h]hwork}(hjY4hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjQ4ubh< and then queues it if the disable depth reached 0. Returns }(hjQ4hhhNhNubj)}(h``true``h]htrue}(hjk4hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjQ4ubh$ if the disable depth reached 0 and }(hjQ4hhhNhNubj)}(h**work**h]hwork}(hj}4hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjQ4ubh is queued, and }(hjQ4hhhNhNubj)}(h ``false``h]hfalse}(hj4hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjQ4ubh otherwise.}(hjQ4hhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj3ubh)}(hXNote that **work** is always queued when disable depth reaches zero. If the desired behavior is queueing only if certain events took place while **work** is disabled, the user should implement the necessary state tracking and perform explicit conditional queueing after enable_work().h](h Note that }(hj4hhhNhNubj)}(h**work**h]hwork}(hj4hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj4ubh is always queued when disable depth reaches zero. 
If the desired behavior is queueing only if certain events took place while }(hj4hhhNhNubj)}(h**work**h]hwork}(hj4hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj4ubh is disabled, the user should implement the necessary state tracking and perform explicit conditional queueing after enable_work().}(hj4hhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj3ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j%schedule_delayed_work_on (C function)c.schedule_delayed_work_onhNtauh1jhjhhhNhNubj)}(hhh](j)}(hXbool schedule_delayed_work_on (int cpu, struct delayed_work *dwork, unsigned long delay)h]j )}(hWbool schedule_delayed_work_on(int cpu, struct delayed_work *dwork, unsigned long delay)h](j)}(hj7&h]hbool}(hj4hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj4hhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhM ubj8)}(h h]h }(hj 5hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj4hhhj5hM ubjI)}(hschedule_delayed_work_onh]jO)}(hschedule_delayed_work_onh]hschedule_delayed_work_on}(hj5hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj5ubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhj4hhhj5hM ubj)}(h:(int cpu, struct delayed_work *dwork, unsigned long delay)h](j)}(hint cpuh](j)}(hinth]hint}(hj75hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj35ubj8)}(h h]h }(hjE5hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj35ubjO)}(hcpuh]hcpu}(hjS5hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj35ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj/5ubj)}(hstruct delayed_work *dworkh](j&)}(hj)h]hstruct}(hjl5hhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjh5ubj8)}(h h]h }(hjy5hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjh5ubh)}(hhh]jO)}(h delayed_workh]h delayed_work}(hj5hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj5ubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetj5modnameN classnameNjojr)}ju]jx)}jkj5sbc.schedule_delayed_work_onasbuh1hhjh5ubj8)}(h h]h }(hj5hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjh5ubj)}(hjah]h*}(hj5hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjh5ubjO)}(hdworkh]hdwork}(hj5hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjh5ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj/5ubj)}(hunsigned long delayh](j)}(hunsignedh]hunsigned}(hj5hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj5ubj8)}(h h]h }(hj5hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj5ubj)}(hlongh]hlong}(hj5hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj5ubj8)}(h h]h }(hj6hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj5ubjO)}(hdelayh]hdelay}(hj6hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj5ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj/5ubeh}(h]h ]h"]h$]h&]jjuh1jhj4hhhj5hM ubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhj4hhhj5hM ubah}(h]j4ah ](jrjseh"]h$]h&]jwjx)jyhuh1jhj5hM hj4hhubj{)}(hhh]h)}(h1queue work in global workqueue on CPU after delayh]h1queue work in global workqueue on CPU after delay}(hj@6hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj=6hhubah}(h]h ]h"]h$]h&]uh1jzhj4hhhj5hM ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjX6jjX6jjjuh1jhhhjhNhNubj)}(hX**Parameters** ``int cpu`` cpu to use ``struct delayed_work *dwork`` job to be done ``unsigned long delay`` number of jiffies to wait **Description** After waiting for a given time this puts a job in the kernel-global workqueue on the specified CPU.h](h)}(h**Parameters**h]j)}(hjb6h]h Parameters}(hjd6hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj`6ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj\6ubjS)}(hhh](jX)}(h``int cpu`` cpu to use h](j^)}(h ``int cpu``h]j)}(hj6h]hint cpu}(hj6hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj6ubah}(h]h 
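A minimal sketch of the disable/enable-and-queue pattern, assuming a kernel recent enough to provide disable_work_sync() alongside enable_and_queue_work(); the function and work item names are hypothetical::

  #include <linux/workqueue.h>

  /* Hypothetical: pause event handling around a reconfiguration. */
  static void reconfigure(struct workqueue_struct *wq,
                          struct work_struct *event_work)
  {
          /* Bump the disable depth and wait for any running instance. */
          disable_work_sync(event_work);

          /* ... change configuration while event_work cannot run ... */

          /*
           * Drops the disable depth back to zero and, at that point,
           * queues the work unconditionally.
           */
          enable_and_queue_work(wq, event_work);
  }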
]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj{6ubjw)}(hhh]h)}(h cpu to useh]h cpu to use}(hj6hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj6hMhj6ubah}(h]h ]h"]h$]h&]uh1jvhj{6ubeh}(h]h ]h"]h$]h&]uh1jWhj6hMhjx6ubjX)}(h.``struct delayed_work *dwork`` job to be done h](j^)}(h``struct delayed_work *dwork``h]j)}(hj6h]hstruct delayed_work *dwork}(hj6hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj6ubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj6ubjw)}(hhh]h)}(hjob to be doneh]hjob to be done}(hj6hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj6hMhj6ubah}(h]h ]h"]h$]h&]uh1jvhj6ubeh}(h]h ]h"]h$]h&]uh1jWhj6hMhjx6ubjX)}(h2``unsigned long delay`` number of jiffies to wait h](j^)}(h``unsigned long delay``h]j)}(hj6h]hunsigned long delay}(hj6hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj6ubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj6ubjw)}(hhh]h)}(hnumber of jiffies to waith]hnumber of jiffies to wait}(hj 7hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj7hMhj 7ubah}(h]h ]h"]h$]h&]uh1jvhj6ubeh}(h]h ]h"]h$]h&]uh1jWhj7hMhjx6ubeh}(h]h ]h"]h$]h&]uh1jRhj\6ubh)}(h**Description**h]j)}(hj.7h]h Description}(hj07hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj,7ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj\6ubh)}(hcAfter waiting for a given time this puts a job in the kernel-global workqueue on the specified CPU.h]hcAfter waiting for a given time this puts a job in the kernel-global workqueue on the specified CPU.}(hjD7hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhMhj\6ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j"schedule_delayed_work (C function)c.schedule_delayed_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(hLbool schedule_delayed_work (struct delayed_work *dwork, unsigned long delay)h]j )}(hKbool schedule_delayed_work(struct delayed_work *dwork, unsigned long delay)h](j)}(hj7&h]hbool}(hjs7hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjo7hhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhM.ubj8)}(h h]h }(hj7hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjo7hhhj7hM.ubjI)}(hschedule_delayed_workh]jO)}(hschedule_delayed_workh]hschedule_delayed_work}(hj7hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj7ubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjo7hhhj7hM.ubj)}(h1(struct delayed_work *dwork, unsigned long delay)h](j)}(hstruct delayed_work *dworkh](j&)}(hj)h]hstruct}(hj7hhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hj7ubj8)}(h h]h }(hj7hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj7ubh)}(hhh]jO)}(h delayed_workh]h delayed_work}(hj7hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj7ubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetj7modnameN classnameNjojr)}ju]jx)}jkj7sbc.schedule_delayed_workasbuh1hhj7ubj8)}(h h]h }(hj7hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj7ubj)}(hjah]h*}(hj7hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj7ubjO)}(hdworkh]hdwork}(hj8hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj7ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj7ubj)}(hunsigned long delayh](j)}(hunsignedh]hunsigned}(hj!8hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj8ubj8)}(h h]h }(hj/8hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj8ubj)}(hlongh]hlong}(hj=8hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj8ubj8)}(h h]h }(hjK8hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj8ubjO)}(hdelayh]hdelay}(hjY8hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj8ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj7ubeh}(h]h ]h"]h$]h&]jjuh1jhjo7hhhj7hM.ubeh}(h]h 
]h"]h$]h&]jjjluh1jjmjnhjk7hhhj7hM.ubah}(h]jf7ah ](jrjseh"]h$]h&]jwjx)jyhuh1jhj7hM.hjh7hhubj{)}(hhh]h)}(h-put work task in global workqueue after delayh]h-put work task in global workqueue after delay}(hj8hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhM'hj8hhubah}(h]h ]h"]h$]h&]uh1jzhjh7hhhj7hM.ubeh}(h]h ](jfunctioneh"]h$]h&]jjjj8jj8jjjuh1jhhhjhNhNubj)}(h**Parameters** ``struct delayed_work *dwork`` job to be done ``unsigned long delay`` number of jiffies to wait or 0 for immediate execution **Description** After waiting for a given time this puts a job in the kernel-global workqueue.h](h)}(h**Parameters**h]j)}(hj8h]h Parameters}(hj8hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj8ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhM+hj8ubjS)}(hhh](jX)}(h.``struct delayed_work *dwork`` job to be done h](j^)}(h``struct delayed_work *dwork``h]j)}(hj8h]hstruct delayed_work *dwork}(hj8hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj8ubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhM(hj8ubjw)}(hhh]h)}(hjob to be doneh]hjob to be done}(hj8hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj8hM(hj8ubah}(h]h ]h"]h$]h&]uh1jvhj8ubeh}(h]h ]h"]h$]h&]uh1jWhj8hM(hj8ubjX)}(hO``unsigned long delay`` number of jiffies to wait or 0 for immediate execution h](j^)}(h``unsigned long delay``h]j)}(hj8h]hunsigned long delay}(hj8hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj8ubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhM)hj8ubjw)}(hhh]h)}(h6number of jiffies to wait or 0 for immediate executionh]h6number of jiffies to wait or 0 for immediate execution}(hj9hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj9hM)hj9ubah}(h]h ]h"]h$]h&]uh1jvhj8ubeh}(h]h ]h"]h$]h&]uh1jWhj9hM)hj8ubeh}(h]h ]h"]h$]h&]uh1jRhj8ubh)}(h**Description**h]j)}(hj89h]h Description}(hj:9hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj69ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhM+hj8ubh)}(hNAfter waiting for a given time this puts a job in the kernel-global workqueue.h]hNAfter waiting for a given time this puts a job in the kernel-global workqueue.}(hjN9hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:781: ./include/linux/workqueue.hhM+hj8ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jfor_each_pool (C macro)c.for_each_poolhNtauh1jhjhhhNhNubj)}(hhh](j)}(h for_each_poolh]j )}(h for_each_poolh]jI)}(h for_each_poolh]jO)}(hjw9h]h for_each_pool}(hj9hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj}9ubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjy9hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM6ubah}(h]h ]h"]h$]h&]jjjluh1jjmjnhju9hhhj9hM6ubah}(h]jp9ah ](jrjseh"]h$]h&]jwjx)jyhuh1jhj9hM6hjr9hhubj{)}(hhh]h}(h]h ]h"]h$]h&]uh1jzhjr9hhhj9hM6ubeh}(h]h ](jmacroeh"]h$]h&]jjjj9jj9jjjuh1jhhhjhNhNubh)}(h``for_each_pool (pool, pi)``h]j)}(hj9h]hfor_each_pool (pool, pi)}(hj9hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj9ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM8hjhhubjJ)}(h/iterate through all worker_pools in the system h]h)}(h.iterate through all worker_pools in the systemh]h.iterate through all worker_pools in the system}(hj9hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: 
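A minimal sketch of a periodic job on the kernel-global workqueue, using hypothetical names::

  #include <linux/jiffies.h>
  #include <linux/workqueue.h>

  static void gc_fn(struct work_struct *work);
  static DECLARE_DELAYED_WORK(gc_dwork, gc_fn);

  static void gc_fn(struct work_struct *work)
  {
          /* ... reclaim stale entries ... */

          /* Re-arm for one second from now. */
          schedule_delayed_work(&gc_dwork, HZ);
  }

  static void gc_start(int cpu)
  {
          /* First run on a specific CPU, one second from now. */
          schedule_delayed_work_on(cpu, &gc_dwork, HZ);
  }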
./kernel/workqueue.chM+hj9ubah}(h]h ]h"]h$]h&]uh1jIhj9hM+hjhhubj)}(hXz**Parameters** ``pool`` iteration cursor ``pi`` integer used for iteration **Description** This must be called either with wq_pool_mutex held or RCU read locked. If the pool needs to be used beyond the locking in effect, the caller is responsible for guaranteeing that the pool stays online. The if/else clause exists only for the lockdep assertion and can be ignored.h](h)}(h**Parameters**h]j)}(hj9h]h Parameters}(hj9hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj9ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM/hj9ubjS)}(hhh](jX)}(h``pool`` iteration cursor h](j^)}(h``pool``h]j)}(hj:h]hpool}(hj :hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj:ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM,hj:ubjw)}(hhh]h)}(hiteration cursorh]hiteration cursor}(hj :hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj:hM,hj:ubah}(h]h ]h"]h$]h&]uh1jvhj:ubeh}(h]h ]h"]h$]h&]uh1jWhj:hM,hj9ubjX)}(h"``pi`` integer used for iteration h](j^)}(h``pi``h]j)}(hj@:h]hpi}(hjB:hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj>:ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM-hj::ubjw)}(hhh]h)}(hinteger used for iterationh]hinteger used for iteration}(hjY:hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjU:hM-hjV:ubah}(h]h ]h"]h$]h&]uh1jvhj::ubeh}(h]h ]h"]h$]h&]uh1jWhjU:hM-hj9ubeh}(h]h ]h"]h$]h&]uh1jRhj9ubh)}(h**Description**h]j)}(hj{:h]h Description}(hj}:hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjy:ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM/hj9ubh)}(hThis must be called either with wq_pool_mutex held or RCU read locked. If the pool needs to be used beyond the locking in effect, the caller is responsible for guaranteeing that the pool stays online.h]hThis must be called either with wq_pool_mutex held or RCU read locked. 
If the pool needs to be used beyond the locking in effect, the caller is responsible for guaranteeing that the pool stays online.}(hj:hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM/hj9ubh)}(hLThe if/else clause exists only for the lockdep assertion and can be ignored.h]hLThe if/else clause exists only for the lockdep assertion and can be ignored.}(hj:hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM3hj9ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jfor_each_pool_worker (C macro)c.for_each_pool_workerhNtauh1jhjhhhNhNubj)}(hhh](j)}(hfor_each_pool_workerh]j )}(hfor_each_pool_workerh]jI)}(hfor_each_pool_workerh]jO)}(hj:h]hfor_each_pool_worker}(hj:hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj:ubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhj:hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMEubah}(h]h ]h"]h$]h&]jjjluh1jjmjnhj:hhhj:hMEubah}(h]j:ah ](jrjseh"]h$]h&]jwjx)jyhuh1jhj:hMEhj:hhubj{)}(hhh]h}(h]h ]h"]h$]h&]uh1jzhj:hhhj:hMEubeh}(h]h ](jmacroeh"]h$]h&]jjjj:jj:jjjuh1jhhhjhNhNubh)}(h'``for_each_pool_worker (worker, pool)``h]j)}(hj;h]h#for_each_pool_worker (worker, pool)}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj;ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMGhjhhubjJ)}(h-iterate through all workers of a worker_pool h]h)}(h,iterate through all workers of a worker_poolh]h,iterate through all workers of a worker_pool}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM<hj;ubah}(h]h ]h"]h$]h&]uh1jIhj-;hM<hjhhubj)}(h**Parameters** ``worker`` iteration cursor ``pool`` worker_pool to iterate workers of **Description** This must be called with wq_pool_attach_mutex. 
The if/else clause exists only for the lockdep assertion and can be ignored.h](h)}(h**Parameters**h]j)}(hj:;h]h Parameters}(hj<;hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj8;ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM@hj4;ubjS)}(hhh](jX)}(h``worker`` iteration cursor h](j^)}(h ``worker``h]j)}(hjY;h]hworker}(hj[;hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjW;ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM=hjS;ubjw)}(hhh]h)}(hiteration cursorh]hiteration cursor}(hjr;hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjn;hM=hjo;ubah}(h]h ]h"]h$]h&]uh1jvhjS;ubeh}(h]h ]h"]h$]h&]uh1jWhjn;hM=hjP;ubjX)}(h+``pool`` worker_pool to iterate workers of h](j^)}(h``pool``h]j)}(hj;h]hpool}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj;ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM>hj;ubjw)}(hhh]h)}(h!worker_pool to iterate workers ofh]h!worker_pool to iterate workers of}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj;hM>hj;ubah}(h]h ]h"]h$]h&]uh1jvhj;ubeh}(h]h ]h"]h$]h&]uh1jWhj;hM>hjP;ubeh}(h]h ]h"]h$]h&]uh1jRhj4;ubh)}(h**Description**h]j)}(hj;h]h Description}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj;ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM@hj4;ubh)}(h.This must be called with wq_pool_attach_mutex.h]h.This must be called with wq_pool_attach_mutex.}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM@hj4;ubh)}(hLThe if/else clause exists only for the lockdep assertion and can be ignored.h]hLThe if/else clause exists only for the lockdep assertion and can be ignored.}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMBhj4;ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jfor_each_pwq (C macro)c.for_each_pwqhNtauh1jhjhhhNhNubj)}(hhh](j)}(h for_each_pwqh]j )}(h for_each_pwqh]jI)}(h for_each_pwqh]jO)}(hj<h]h for_each_pwq}(hj%<hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj!<ubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhj<hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMVubah}(h]h ]h"]h$]h&]jjjluh1jjmjnhj<hhhj8<hMVubah}(h]j<ah ](jrjseh"]h$]h&]jwjx)jyhuh1jhj8<hMVhj<hhubj{)}(hhh]h}(h]h ]h"]h$]h&]uh1jzhj<hhhj8<hMVubeh}(h]h ](jmacroeh"]h$]h&]jjjjQ<jjQ<jjjuh1jhhhjhNhNubh)}(h``for_each_pwq (pwq, wq)``h]j)}(hjW<h]hfor_each_pwq (pwq, wq)}(hjY<hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjU<ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMXhjhhubjJ)}(h?iterate through all pool_workqueues of the specified workqueue h]h)}(h>iterate through all pool_workqueues of the specified workqueueh]h>iterate through all pool_workqueues of the specified workqueue}(hjq<hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMKhjm<ubah}(h]h ]h"]h$]h&]uh1jIhj<hMKhjhhubj)}(hXl**Parameters** ``pwq`` iteration cursor ``wq`` the target workqueue **Description** This must be called either with wq->mutex held or RCU read locked. If the pwq needs to be used beyond the locking in effect, the caller is responsible for guaranteeing that the pwq stays online. 
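To make the locking rule above concrete, the sketch below walks the pool_workqueues of a workqueue inside an RCU read-side critical section. Note that for_each_pwq() is local to kernel/workqueue.c, so this pattern is only available there; the helper name count_pwqs() and the pr_info() message are illustrative assumptions, not in-tree code::

	/* Minimal sketch: count the pwqs of @wq while satisfying the RCU rule above. */
	static void count_pwqs(struct workqueue_struct *wq)
	{
		struct pool_workqueue *pwq;
		int n = 0;

		rcu_read_lock();		/* "RCU read locked" as required */
		for_each_pwq(pwq, wq)
			n++;
		rcu_read_unlock();

		pr_info("workqueue %s: %d pool_workqueues\n", wq->name, n);
	}
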
The if/else clause exists only for the lockdep assertion and can be ignored.h](h)}(h**Parameters**h]j)}(hj<h]h Parameters}(hj<hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj<ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMOhj<ubjS)}(hhh](jX)}(h``pwq`` iteration cursor h](j^)}(h``pwq``h]j)}(hj<h]hpwq}(hj<hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj<ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMLhj<ubjw)}(hhh]h)}(hiteration cursorh]hiteration cursor}(hj<hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj<hMLhj<ubah}(h]h ]h"]h$]h&]uh1jvhj<ubeh}(h]h ]h"]h$]h&]uh1jWhj<hMLhj<ubjX)}(h``wq`` the target workqueue h](j^)}(h``wq``h]j)}(hj<h]hwq}(hj<hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj<ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMMhj<ubjw)}(hhh]h)}(hthe target workqueueh]hthe target workqueue}(hj<hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj<hMMhj<ubah}(h]h ]h"]h$]h&]uh1jvhj<ubeh}(h]h ]h"]h$]h&]uh1jWhj<hMMhj<ubeh}(h]h ]h"]h$]h&]uh1jRhj<ubh)}(h**Description**h]j)}(hj=h]h Description}(hj!=hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj=ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMOhj<ubh)}(hThis must be called either with wq->mutex held or RCU read locked. If the pwq needs to be used beyond the locking in effect, the caller is responsible for guaranteeing that the pwq stays online.h]hThis must be called either with wq->mutex held or RCU read locked. If the pwq needs to be used beyond the locking in effect, the caller is responsible for guaranteeing that the pwq stays online.}(hj5=hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMOhj<ubh)}(hLThe if/else clause exists only for the lockdep assertion and can be ignored.h]hLThe if/else clause exists only for the lockdep assertion and can be ignored.}(hjD=hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMShj<ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j"worker_pool_assign_id (C function)c.worker_pool_assign_idhNtauh1jhjhhhNhNubj)}(hhh](j)}(h4int worker_pool_assign_id (struct worker_pool *pool)h]j )}(h3int worker_pool_assign_id(struct worker_pool *pool)h](j)}(hinth]hint}(hjs=hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjo=hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hj=hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjo=hhhj=hMubjI)}(hworker_pool_assign_idh]jO)}(hworker_pool_assign_idh]hworker_pool_assign_id}(hj=hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj=ubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjo=hhhj=hMubj)}(h(struct worker_pool *pool)h]j)}(hstruct worker_pool *poolh](j&)}(hj)h]hstruct}(hj=hhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hj=ubj8)}(h h]h }(hj=hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj=ubh)}(hhh]jO)}(h worker_poolh]h worker_pool}(hj=hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj=ubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetj=modnameN classnameNjojr)}ju]jx)}jkj=sbc.worker_pool_assign_idasbuh1hhj=ubj8)}(h h]h }(hj=hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj=ubj)}(hjah]h*}(hj=hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj=ubjO)}(hpoolh]hpool}(hj >hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj=ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj=ubah}(h]h ]h"]h$]h&]jjuh1jhjo=hhhj=hMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjk=hhhj=hMubah}(h]jf=ah ](jrjseh"]h$]h&]jwjx)jyhuh1jhj=hMhjh=hhubj{)}(hhh]h)}(h%allocate ID and assign it to 
**pool**h](hallocate ID and assign it to }(hj3>hhhNhNubj)}(h**pool**h]hpool}(hj;>hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj3>ubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj0>hhubah}(h]h ]h"]h$]h&]uh1jzhjh=hhhj=hMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjY>jjY>jjjuh1jhhhjhNhNubj)}(h**Parameters** ``struct worker_pool *pool`` the pool pointer of interest **Description** Returns 0 if ID in [0, WORK_OFFQ_POOL_NONE) is allocated and assigned successfully, -errno on failure.h](h)}(h**Parameters**h]j)}(hjc>h]h Parameters}(hje>hhhNhNubah}(h]h ]h"]h$]h&]uh1jhja>ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj]>ubjS)}(hhh]jX)}(h:``struct worker_pool *pool`` the pool pointer of interest h](j^)}(h``struct worker_pool *pool``h]j)}(hj>h]hstruct worker_pool *pool}(hj>hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj>ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj|>ubjw)}(hhh]h)}(hthe pool pointer of interesth]hthe pool pointer of interest}(hj>hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj>hMhj>ubah}(h]h ]h"]h$]h&]uh1jvhj|>ubeh}(h]h ]h"]h$]h&]uh1jWhj>hMhjy>ubah}(h]h ]h"]h$]h&]uh1jRhj]>ubh)}(h**Description**h]j)}(hj>h]h Description}(hj>hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj>ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj]>ubh)}(hfReturns 0 if ID in [0, WORK_OFFQ_POOL_NONE) is allocated and assigned successfully, -errno on failure.h]hfReturns 0 if ID in [0, WORK_OFFQ_POOL_NONE) is allocated and assigned successfully, -errno on failure.}(hj>hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj]>ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j&unbound_effective_cpumask (C function)c.unbound_effective_cpumaskhNtauh1jhjhhhNhNubj)}(hhh](j)}(hHstruct cpumask * unbound_effective_cpumask (struct workqueue_struct *wq)h]j )}(hFstruct cpumask *unbound_effective_cpumask(struct workqueue_struct *wq)h](j&)}(hj)h]hstruct}(hj?hhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hj>hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hj?hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj>hhhj?hMubh)}(hhh]jO)}(hcpumaskh]hcpumask}(hj!?hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj?ubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetj#?modnameN classnameNjojr)}ju]jx)}jkunbound_effective_cpumasksbc.unbound_effective_cpumaskasbuh1hhj>hhhj?hMubj8)}(h h]h }(hjB?hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj>hhhj?hMubj)}(hjah]h*}(hjP?hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj>hhhj?hMubjI)}(hunbound_effective_cpumaskh]jO)}(hj??h]hunbound_effective_cpumask}(hja?hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj]?ubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhj>hhhj?hMubj)}(h(struct workqueue_struct *wq)h]j)}(hstruct workqueue_struct *wqh](j&)}(hj)h]hstruct}(hj|?hhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjx?ubj8)}(h h]h }(hj?hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjx?ubh)}(hhh]jO)}(hworkqueue_structh]hworkqueue_struct}(hj?hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj?ubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetj?modnameN classnameNjojr)}ju]j=?c.unbound_effective_cpumaskasbuh1hhjx?ubj8)}(h h]h }(hj?hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjx?ubj)}(hjah]h*}(hj?hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjx?ubjO)}(hwqh]hwq}(hj?hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjx?ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjt?ubah}(h]h ]h"]h$]h&]jjuh1jhj>hhhj?hMubeh}(h]h 
]h"]h$]h&]jjjluh1jjmjnhj>hhhj?hMubah}(h]j>ah ](jrjseh"]h$]h&]jwjx)jyhuh1jhj?hMhj>hhubj{)}(hhh]h)}(h)effective cpumask of an unbound workqueueh]h)effective cpumask of an unbound workqueue}(hj?hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj?hhubah}(h]h ]h"]h$]h&]uh1jzhj>hhhj?hMubeh}(h]h ](jfunctioneh"]h$]h&]jjjj@jj@jjjuh1jhhhjhNhNubj)}(hX@**Parameters** ``struct workqueue_struct *wq`` workqueue of interest **Description** **wq->unbound_attrs->cpumask** contains the cpumask requested by the user which is masked with wq_unbound_cpumask to determine the effective cpumask. The default pwq is always mapped to the pool with the current effective cpumask.h](h)}(h**Parameters**h]j)}(hj@h]h Parameters}(hj!@hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj@ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj@ubjS)}(hhh]jX)}(h6``struct workqueue_struct *wq`` workqueue of interest h](j^)}(h``struct workqueue_struct *wq``h]j)}(hj>@h]hstruct workqueue_struct *wq}(hj@@hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj<@ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj8@ubjw)}(hhh]h)}(hworkqueue of interesth]hworkqueue of interest}(hjW@hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjS@hMhjT@ubah}(h]h ]h"]h$]h&]uh1jvhj8@ubeh}(h]h ]h"]h$]h&]uh1jWhjS@hMhj5@ubah}(h]h ]h"]h$]h&]uh1jRhj@ubh)}(h**Description**h]j)}(hjy@h]h Description}(hj{@hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjw@ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj@ubh)}(h**wq->unbound_attrs->cpumask** contains the cpumask requested by the user which is masked with wq_unbound_cpumask to determine the effective cpumask. The default pwq is always mapped to the pool with the current effective cpumask.h](j)}(h**wq->unbound_attrs->cpumask**h]hwq->unbound_attrs->cpumask}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj@ubh contains the cpumask requested by the user which is masked with wq_unbound_cpumask to determine the effective cpumask. 
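As a hedged illustration of the masking just described, the effective mask is simply the intersection of the cpumask requested in the unbound attrs and wq_unbound_cpumask. The helper below and its parameter names are assumptions made for this sketch; it is not the in-tree function::

	/* effective = requested & wq_unbound_cpumask (illustrative only) */
	static void show_effective_cpumask(const struct cpumask *requested,
					   const struct cpumask *wq_unbound)
	{
		cpumask_var_t effective;

		if (!alloc_cpumask_var(&effective, GFP_KERNEL))
			return;

		cpumask_and(effective, requested, wq_unbound);
		pr_info("effective cpumask: %*pbl\n", cpumask_pr_args(effective));
		free_cpumask_var(effective);
	}
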
The default pwq is always mapped to the pool with the current effective cpumask.}(hj@hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj@ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jget_work_pool (C function)c.get_work_poolhNtauh1jhjhhhNhNubj)}(hhh](j)}(h=struct worker_pool * get_work_pool (struct work_struct *work)h]j )}(h;struct worker_pool *get_work_pool(struct work_struct *work)h](j&)}(hj)h]hstruct}(hj@hhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hj@hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMoubj8)}(h h]h }(hj@hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj@hhhj@hMoubh)}(hhh]jO)}(h worker_poolh]h worker_pool}(hj@hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj@ubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetj@modnameN classnameNjojr)}ju]jx)}jk get_work_poolsbc.get_work_poolasbuh1hhj@hhhj@hMoubj8)}(h h]h }(hj AhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj@hhhj@hMoubj)}(hjah]h*}(hjAhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj@hhhj@hMoubjI)}(h get_work_poolh]jO)}(hj Ah]h get_work_pool}(hj+AhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj'Aubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhj@hhhj@hMoubj)}(h(struct work_struct *work)h]j)}(hstruct work_struct *workh](j&)}(hj)h]hstruct}(hjFAhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjBAubj8)}(h h]h }(hjSAhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjBAubh)}(hhh]jO)}(h work_structh]h work_struct}(hjdAhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjaAubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjfAmodnameN classnameNjojr)}ju]jAc.get_work_poolasbuh1hhjBAubj8)}(h h]h }(hjAhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjBAubj)}(hjah]h*}(hjAhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjBAubjO)}(hworkh]hwork}(hjAhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjBAubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj>Aubah}(h]h ]h"]h$]h&]jjuh1jhj@hhhj@hMoubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhj@hhhj@hMoubah}(h]j@ah ](jrjseh"]h$]h&]jwjx)jyhuh1jhj@hMohj@hhubj{)}(hhh]h)}(h7return the worker_pool a given work was associated withh]h7return the worker_pool a given work was associated with}(hjAhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMahjAhhubah}(h]h ]h"]h$]h&]uh1jzhj@hhhj@hMoubeh}(h]h ](jfunctioneh"]h$]h&]jjjjAjjAjjjuh1jhhhjhNhNubj)}(hXi**Parameters** ``struct work_struct *work`` the work item of interest **Description** Pools are created and destroyed under wq_pool_mutex, and allows read access under RCU read lock. As such, this function should be called under wq_pool_mutex or inside of a rcu_read_lock() region. All fields of the returned pool are accessible as long as the above mentioned locking is in effect. If the returned pool needs to be used beyond the critical section, the caller is responsible for ensuring the returned pool is and stays online. **Return** The worker_pool **work** was last associated with. 
``NULL`` if none.h](h)}(h**Parameters**h]j)}(hjAh]h Parameters}(hjAhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjAubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMehjAubjS)}(hhh]jX)}(h7``struct work_struct *work`` the work item of interest h](j^)}(h``struct work_struct *work``h]j)}(hjBh]hstruct work_struct *work}(hj BhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjBubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMbhjBubjw)}(hhh]h)}(hthe work item of interesth]hthe work item of interest}(hj!BhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjBhMbhjBubah}(h]h ]h"]h$]h&]uh1jvhjBubeh}(h]h ]h"]h$]h&]uh1jWhjBhMbhjAubah}(h]h ]h"]h$]h&]uh1jRhjAubh)}(h**Description**h]j)}(hjCBh]h Description}(hjEBhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjABubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMdhjAubh)}(hPools are created and destroyed under wq_pool_mutex, and allows read access under RCU read lock. As such, this function should be called under wq_pool_mutex or inside of a rcu_read_lock() region.h]hPools are created and destroyed under wq_pool_mutex, and allows read access under RCU read lock. As such, this function should be called under wq_pool_mutex or inside of a rcu_read_lock() region.}(hjYBhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMdhjAubh)}(hAll fields of the returned pool are accessible as long as the above mentioned locking is in effect. If the returned pool needs to be used beyond the critical section, the caller is responsible for ensuring the returned pool is and stays online.h]hAll fields of the returned pool are accessible as long as the above mentioned locking is in effect. If the returned pool needs to be used beyond the critical section, the caller is responsible for ensuring the returned pool is and stays online.}(hjhBhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhhjAubh)}(h **Return**h]j)}(hjyBh]hReturn}(hj{BhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjwBubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMmhjAubh)}(hEThe worker_pool **work** was last associated with. ``NULL`` if none.h](hThe worker_pool }(hjBhhhNhNubj)}(h**work**h]hwork}(hjBhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjBubh was last associated with. 
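Referring back to get_work_pool() documented above, a minimal usage sketch is to do the lookup inside an rcu_read_lock() region and only dereference the returned pool within it. The wrapper name report_work_pool() and the printed message are assumptions for illustration::

	static void report_work_pool(struct work_struct *work)
	{
		struct worker_pool *pool;

		rcu_read_lock();
		pool = get_work_pool(work);
		if (pool)
			pr_info("work %p last ran on pool %d\n", work, pool->id);
		rcu_read_unlock();	/* pool must not be used past this point */
	}
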
}(hjBhhhNhNubj)}(h``NULL``h]hNULL}(hjBhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjBubh if none.}(hjBhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMmhjAubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jworker_set_flags (C function)c.worker_set_flagshNtauh1jhjhhhNhNubj)}(hhh](j)}(hAvoid worker_set_flags (struct worker *worker, unsigned int flags)h]j )}(h@void worker_set_flags(struct worker *worker, unsigned int flags)h](j)}(hvoidh]hvoid}(hjBhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjBhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hjBhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjBhhhjBhMubjI)}(hworker_set_flagsh]jO)}(hworker_set_flagsh]hworker_set_flags}(hjChhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjBubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjBhhhjBhMubj)}(h+(struct worker *worker, unsigned int flags)h](j)}(hstruct worker *workerh](j&)}(hj)h]hstruct}(hjChhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjCubj8)}(h h]h }(hj,ChhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjCubh)}(hhh]jO)}(hworkerh]hworker}(hj=ChhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj:Cubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetj?CmodnameN classnameNjojr)}ju]jx)}jkjCsbc.worker_set_flagsasbuh1hhjCubj8)}(h h]h }(hj]ChhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjCubj)}(hjah]h*}(hjkChhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjCubjO)}(hworkerh]hworker}(hjxChhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjCubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjCubj)}(hunsigned int flagsh](j)}(hunsignedh]hunsigned}(hjChhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjCubj8)}(h h]h }(hjChhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjCubj)}(hinth]hint}(hjChhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjCubj8)}(h h]h }(hjChhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjCubjO)}(hflagsh]hflags}(hjChhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjCubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjCubeh}(h]h ]h"]h$]h&]jjuh1jhjBhhhjBhMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjBhhhjBhMubah}(h]jBah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjBhMhjBhhubj{)}(hhh]h)}(h2set worker flags and adjust nr_running accordinglyh]h2set worker flags and adjust nr_running accordingly}(hjChhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjChhubah}(h]h ]h"]h$]h&]uh1jzhjBhhhjBhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjj Djj Djjjuh1jhhhjhNhNubj)}(h**Parameters** ``struct worker *worker`` self ``unsigned int flags`` flags to set **Description** Set **flags** in **worker->flags** and adjust nr_running accordingly.h](h)}(h**Parameters**h]j)}(hjDh]h Parameters}(hjDhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjDubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjDubjS)}(hhh](jX)}(h``struct worker *worker`` self h](j^)}(h``struct worker *worker``h]j)}(hj4Dh]hstruct worker *worker}(hj6DhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj2Dubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj.Dubjw)}(hhh]h)}(hselfh]hself}(hjMDhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjIDhMhjJDubah}(h]h ]h"]h$]h&]uh1jvhj.Dubeh}(h]h ]h"]h$]h&]uh1jWhjIDhMhj+DubjX)}(h$``unsigned int flags`` flags to set h](j^)}(h``unsigned int flags``h]j)}(hjmDh]hunsigned int flags}(hjoDhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjkDubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjgDubjw)}(hhh]h)}(h flags to seth]h flags to set}(hjDhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjDhMhjDubah}(h]h ]h"]h$]h&]uh1jvhjgDubeh}(h]h ]h"]h$]h&]uh1jWhjDhMhj+Dubeh}(h]h 
]h"]h$]h&]uh1jRhjDubh)}(h**Description**h]j)}(hjDh]h Description}(hjDhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjDubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjDubh)}(hESet **flags** in **worker->flags** and adjust nr_running accordingly.h](hSet }(hjDhhhNhNubj)}(h **flags**h]hflags}(hjDhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjDubh in }(hjDhhhNhNubj)}(h**worker->flags**h]h worker->flags}(hjDhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjDubh# and adjust nr_running accordingly.}(hjDhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjDubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jworker_clr_flags (C function)c.worker_clr_flagshNtauh1jhjhhhNhNubj)}(hhh](j)}(hAvoid worker_clr_flags (struct worker *worker, unsigned int flags)h]j )}(h@void worker_clr_flags(struct worker *worker, unsigned int flags)h](j)}(hvoidh]hvoid}(hjEhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj EhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hj EhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj EhhhjEhMubjI)}(hworker_clr_flagsh]jO)}(hworker_clr_flagsh]hworker_clr_flags}(hj2EhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj.Eubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhj EhhhjEhMubj)}(h+(struct worker *worker, unsigned int flags)h](j)}(hstruct worker *workerh](j&)}(hj)h]hstruct}(hjNEhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjJEubj8)}(h h]h }(hj[EhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjJEubh)}(hhh]jO)}(hworkerh]hworker}(hjlEhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjiEubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjnEmodnameN classnameNjojr)}ju]jx)}jkj4Esbc.worker_clr_flagsasbuh1hhjJEubj8)}(h h]h }(hjEhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjJEubj)}(hjah]h*}(hjEhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjJEubjO)}(hworkerh]hworker}(hjEhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjJEubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjFEubj)}(hunsigned int flagsh](j)}(hunsignedh]hunsigned}(hjEhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjEubj8)}(h h]h }(hjEhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjEubj)}(hinth]hint}(hjEhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjEubj8)}(h h]h }(hjEhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjEubjO)}(hflagsh]hflags}(hjEhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjEubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjFEubeh}(h]h ]h"]h$]h&]jjuh1jhj EhhhjEhMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhj EhhhjEhMubah}(h]jEah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjEhMhjEhhubj{)}(hhh]h)}(h4clear worker flags and adjust nr_running accordinglyh]h4clear worker flags and adjust nr_running accordingly}(hj"FhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjFhhubah}(h]h ]h"]h$]h&]uh1jzhjEhhhjEhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjj:Fjj:Fjjjuh1jhhhjhNhNubj)}(h**Parameters** ``struct worker *worker`` self ``unsigned int flags`` flags to clear **Description** Clear **flags** in **worker->flags** and adjust nr_running accordingly.h](h)}(h**Parameters**h]j)}(hjDFh]h Parameters}(hjFFhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjBFubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj>FubjS)}(hhh](jX)}(h``struct worker *worker`` self h](j^)}(h``struct worker *worker``h]j)}(hjcFh]hstruct worker *worker}(hjeFhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjaFubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj]Fubjw)}(hhh]h)}(hselfh]hself}(hj|FhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjxFhMhjyFubah}(h]h ]h"]h$]h&]uh1jvhj]Fubeh}(h]h 
]h"]h$]h&]uh1jWhjxFhMhjZFubjX)}(h&``unsigned int flags`` flags to clear h](j^)}(h``unsigned int flags``h]j)}(hjFh]hunsigned int flags}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjFubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjFubjw)}(hhh]h)}(hflags to clearh]hflags to clear}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjFhMhjFubah}(h]h ]h"]h$]h&]uh1jvhjFubeh}(h]h ]h"]h$]h&]uh1jWhjFhMhjZFubeh}(h]h ]h"]h$]h&]uh1jRhj>Fubh)}(h**Description**h]j)}(hjFh]h Description}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjFubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj>Fubh)}(hGClear **flags** in **worker->flags** and adjust nr_running accordingly.h](hClear }(hjFhhhNhNubj)}(h **flags**h]hflags}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjFubh in }(hjFhhhNhNubj)}(h**worker->flags**h]h worker->flags}(hjGhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjFubh# and adjust nr_running accordingly.}(hjFhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj>Fubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jworker_enter_idle (C function)c.worker_enter_idlehNtauh1jhjhhhNhNubj)}(hhh](j)}(h.void worker_enter_idle (struct worker *worker)h]j )}(h-void worker_enter_idle(struct worker *worker)h](j)}(hvoidh]hvoid}(hj@GhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjlock).h](h)}(h**Parameters**h]j)}(hj"Hh]h Parameters}(hj$HhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj Hubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjHubjS)}(hhh]jX)}(h>``struct worker *worker`` worker which is entering idle state h](j^)}(h``struct worker *worker``h]j)}(hjAHh]hstruct worker *worker}(hjCHhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj?Hubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj;Hubjw)}(hhh]h)}(h#worker which is entering idle stateh]h#worker which is entering idle state}(hjZHhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjVHhMhjWHubah}(h]h ]h"]h$]h&]uh1jvhj;Hubeh}(h]h ]h"]h$]h&]uh1jWhjVHhMhj8Hubah}(h]h ]h"]h$]h&]uh1jRhjHubh)}(h**Description**h]j)}(hj|Hh]h Description}(hj~HhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjzHubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjHubh)}(hM**worker** is entering idle state. Update stats and idle timer if necessary.h](j)}(h **worker**h]hworker}(hjHhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjHubhC is entering idle state. 
Update stats and idle timer if necessary.}(hjHhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjHubh)}(h'LOCKING: raw_spin_lock_irq(pool->lock).h]h'LOCKING: raw_spin_lock_irq(pool->lock).}(hjHhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjHubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jworker_leave_idle (C function)c.worker_leave_idlehNtauh1jhjhhhNhNubj)}(hhh](j)}(h.void worker_leave_idle (struct worker *worker)h]j )}(h-void worker_leave_idle(struct worker *worker)h](j)}(hvoidh]hvoid}(hjHhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjHhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM)ubj8)}(h h]h }(hjHhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjHhhhjHhM)ubjI)}(hworker_leave_idleh]jO)}(hworker_leave_idleh]hworker_leave_idle}(hjHhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjHubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjHhhhjHhM)ubj)}(h(struct worker *worker)h]j)}(hstruct worker *workerh](j&)}(hj)h]hstruct}(hjIhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjIubj8)}(h h]h }(hj(IhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjIubh)}(hhh]jO)}(hworkerh]hworker}(hj9IhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj6Iubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetj;ImodnameN classnameNjojr)}ju]jx)}jkjIsbc.worker_leave_idleasbuh1hhjIubj8)}(h h]h }(hjYIhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjIubj)}(hjah]h*}(hjgIhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjIubjO)}(hworkerh]hworker}(hjtIhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjIubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjIubah}(h]h ]h"]h$]h&]jjuh1jhjHhhhjHhM)ubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjHhhhjHhM)ubah}(h]jHah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjHhM)hjHhhubj{)}(hhh]h)}(hleave idle stateh]hleave idle state}(hjIhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM!hjIhhubah}(h]h ]h"]h$]h&]uh1jzhjHhhhjHhM)ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjIjjIjjjuh1jhhhjhNhNubj)}(h**Parameters** ``struct worker *worker`` worker which is leaving idle state **Description** **worker** is leaving idle state. Update stats. LOCKING: raw_spin_lock_irq(pool->lock).h](h)}(h**Parameters**h]j)}(hjIh]h Parameters}(hjIhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjIubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM%hjIubjS)}(hhh]jX)}(h=``struct worker *worker`` worker which is leaving idle state h](j^)}(h``struct worker *worker``h]j)}(hjIh]hstruct worker *worker}(hjIhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjIubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM"hjIubjw)}(hhh]h)}(h"worker which is leaving idle stateh]h"worker which is leaving idle state}(hjIhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjIhM"hjIubah}(h]h ]h"]h$]h&]uh1jvhjIubeh}(h]h ]h"]h$]h&]uh1jWhjIhM"hjIubah}(h]h ]h"]h$]h&]uh1jRhjIubh)}(h**Description**h]j)}(hjJh]h Description}(hjJhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjJubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM$hjIubh)}(h0**worker** is leaving idle state. Update stats.h](j)}(h **worker**h]hworker}(hj4JhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj0Jubh& is leaving idle state. 
Update stats.}(hj0JhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM$hjIubh)}(h'LOCKING: raw_spin_lock_irq(pool->lock).h]h'LOCKING: raw_spin_lock_irq(pool->lock).}(hjMJhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM&hjIubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j'find_worker_executing_work (C function)c.find_worker_executing_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(h_struct worker * find_worker_executing_work (struct worker_pool *pool, struct work_struct *work)h]j )}(h]struct worker *find_worker_executing_work(struct worker_pool *pool, struct work_struct *work)h](j&)}(hj)h]hstruct}(hj|JhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjxJhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMUubj8)}(h h]h }(hjJhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjxJhhhjJhMUubh)}(hhh]jO)}(hworkerh]hworker}(hjJhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjJubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjJmodnameN classnameNjojr)}ju]jx)}jkfind_worker_executing_worksbc.find_worker_executing_workasbuh1hhjxJhhhjJhMUubj8)}(h h]h }(hjJhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjxJhhhjJhMUubj)}(hjah]h*}(hjJhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjxJhhhjJhMUubjI)}(hfind_worker_executing_workh]jO)}(hjJh]hfind_worker_executing_work}(hjJhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjJubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjxJhhhjJhMUubj)}(h4(struct worker_pool *pool, struct work_struct *work)h](j)}(hstruct worker_pool *poolh](j&)}(hj)h]hstruct}(hjJhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjJubj8)}(h h]h }(hjKhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjJubh)}(hhh]jO)}(h worker_poolh]h worker_pool}(hjKhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjKubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjKmodnameN classnameNjojr)}ju]jJc.find_worker_executing_workasbuh1hhjJubj8)}(h h]h }(hj2KhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjJubj)}(hjah]h*}(hj@KhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjJubjO)}(hpoolh]hpool}(hjMKhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjJubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjJubj)}(hstruct work_struct *workh](j&)}(hj)h]hstruct}(hjfKhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjbKubj8)}(h h]h }(hjsKhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjbKubh)}(hhh]jO)}(h work_structh]h work_struct}(hjKhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjKubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjKmodnameN classnameNjojr)}ju]jJc.find_worker_executing_workasbuh1hhjbKubj8)}(h h]h }(hjKhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjbKubj)}(hjah]h*}(hjKhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjbKubjO)}(hworkh]hwork}(hjKhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjbKubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjJubeh}(h]h ]h"]h$]h&]jjuh1jhjxJhhhjJhMUubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjtJhhhjJhMUubah}(h]joJah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjJhMUhjqJhhubj{)}(hhh]h)}(h%find worker which is executing a workh]h%find worker which is executing a work}(hjKhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM5hjKhhubah}(h]h ]h"]h$]h&]uh1jzhjqJhhhjJhMUubeh}(h]h ](jfunctioneh"]h$]h&]jjjjKjjKjjjuh1jhhhjhNhNubj)}(hX**Parameters** ``struct worker_pool *pool`` pool of interest ``struct work_struct *work`` work to find worker for **Description** Find a worker which is executing **work** on **pool** by searching **pool->busy_hash** which is keyed by the address of **work**. For a worker to match, its current execution should match the address of **work** and its work function. 
This is to avoid unwanted dependency between unrelated work executions through a work item being recycled while still being executed. This is a bit tricky. A work item may be freed once its execution starts and nothing prevents the freed area from being recycled for another work item. If the same work item address ends up being reused before the original execution finishes, workqueue will identify the recycled work item as currently executing and make it wait until the current execution finishes, introducing an unwanted dependency. This function checks the work item address and work function to avoid false positives. Note that this isn't complete as one may construct a work function which can introduce dependency onto itself through a recycled work item. Well, if somebody wants to shoot oneself in the foot that badly, there's only so much we can do, and if such deadlock actually occurs, it should be easy to locate the culprit work function. **Context** raw_spin_lock_irq(pool->lock). **Return** Pointer to worker which is executing **work** if found, ``NULL`` otherwise.h](h)}(h**Parameters**h]j)}(hj Lh]h Parameters}(hj LhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjLubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM9hjLubjS)}(hhh](jX)}(h.``struct worker_pool *pool`` pool of interest h](j^)}(h``struct worker_pool *pool``h]j)}(hj(Lh]hstruct worker_pool *pool}(hj*LhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj&Lubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM6hj"Lubjw)}(hhh]h)}(hpool of interesth]hpool of interest}(hjALhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj=LhM6hj>Lubah}(h]h ]h"]h$]h&]uh1jvhj"Lubeh}(h]h ]h"]h$]h&]uh1jWhj=LhM6hjLubjX)}(h5``struct work_struct *work`` work to find worker for h](j^)}(h``struct work_struct *work``h]j)}(hjaLh]hstruct work_struct *work}(hjcLhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj_Lubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM7hj[Lubjw)}(hhh]h)}(hwork to find worker forh]hwork to find worker for}(hjzLhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjvLhM7hjwLubah}(h]h ]h"]h$]h&]uh1jvhj[Lubeh}(h]h ]h"]h$]h&]uh1jWhjvLhM7hjLubeh}(h]h ]h"]h$]h&]uh1jRhjLubh)}(h**Description**h]j)}(hjLh]h Description}(hjLhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjLubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM9hjLubh)}(hXrFind a worker which is executing **work** on **pool** by searching **pool->busy_hash** which is keyed by the address of **work**. For a worker to match, its current execution should match the address of **work** and its work function. This is to avoid unwanted dependency between unrelated work executions through a work item being recycled while still being executed.h](h!Find a worker which is executing }(hjLhhhNhNubj)}(h**work**h]hwork}(hjLhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjLubh on }(hjLhhhNhNubj)}(h**pool**h]hpool}(hjLhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjLubh by searching }(hjLhhhNhNubj)}(h**pool->busy_hash**h]hpool->busy_hash}(hjLhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjLubh" which is keyed by the address of }(hjLhhhNhNubj)}(h**work**h]hwork}(hjLhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjLubhL. For a worker to match, its current execution should match the address of }(hjLhhhNhNubj)}(h**work**h]hwork}(hjMhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjLubh and its work function. 
This is to avoid unwanted dependency between unrelated work executions through a work item being recycled while still being executed.}(hjLhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM9hjLubh)}(hXThis is a bit tricky. A work item may be freed once its execution starts and nothing prevents the freed area from being recycled for another work item. If the same work item address ends up being reused before the original execution finishes, workqueue will identify the recycled work item as currently executing and make it wait until the current execution finishes, introducing an unwanted dependency.h]hXThis is a bit tricky. A work item may be freed once its execution starts and nothing prevents the freed area from being recycled for another work item. If the same work item address ends up being reused before the original execution finishes, workqueue will identify the recycled work item as currently executing and make it wait until the current execution finishes, introducing an unwanted dependency.}(hjMhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM@hjLubh)}(hXThis function checks the work item address and work function to avoid false positives. Note that this isn't complete as one may construct a work function which can introduce dependency onto itself through a recycled work item. Well, if somebody wants to shoot oneself in the foot that badly, there's only so much we can do, and if such deadlock actually occurs, it should be easy to locate the culprit work function.h]hXThis function checks the work item address and work function to avoid false positives. Note that this isn’t complete as one may construct a work function which can introduce dependency onto itself through a recycled work item. 
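The matching rule described above, same work address and same work function, can be sketched as a busy_hash lookup keyed by the work pointer. This is an illustrative reconstruction of the idea under pool->lock, not necessarily the exact in-tree body; the wrapper name busy_lookup() is an assumption::

	static struct worker *busy_lookup(struct worker_pool *pool,
					  struct work_struct *work)
	{
		struct worker *worker;

		hash_for_each_possible(pool->busy_hash, worker, hentry,
				       (unsigned long)work)
			if (worker->current_work == work &&
			    worker->current_func == work->func)
				return worker;	/* address and function match */

		return NULL;			/* nobody is executing @work */
	}
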
Well, if somebody wants to shoot oneself in the foot that badly, there’s only so much we can do, and if such deadlock actually occurs, it should be easy to locate the culprit work function.}(hj*MhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMGhjLubh)}(h **Context**h]j)}(hj;Mh]hContext}(hj=MhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj9Mubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMNhjLubh)}(hraw_spin_lock_irq(pool->lock).h]hraw_spin_lock_irq(pool->lock).}(hjQMhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMNhjLubh)}(h **Return**h]j)}(hjbMh]hReturn}(hjdMhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj`Mubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMPhjLubh)}(hKPointer to worker which is executing **work** if found, ``NULL`` otherwise.h](h%Pointer to worker which is executing }(hjxMhhhNhNubj)}(h**work**h]hwork}(hjMhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjxMubh if found, }(hjxMhhhNhNubj)}(h``NULL``h]hNULL}(hjMhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjxMubh otherwise.}(hjxMhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMQhjLubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jmove_linked_works (C function)c.move_linked_workshNtauh1jhjhhhNhNubj)}(hhh](j)}(hevoid move_linked_works (struct work_struct *work, struct list_head *head, struct work_struct **nextp)h]j )}(hdvoid move_linked_works(struct work_struct *work, struct list_head *head, struct work_struct **nextp)h](j)}(hvoidh]hvoid}(hjMhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjMhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMqubj8)}(h h]h }(hjMhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjMhhhjMhMqubjI)}(hmove_linked_worksh]jO)}(hmove_linked_worksh]hmove_linked_works}(hjMhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjMubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjMhhhjMhMqubj)}(hN(struct work_struct *work, struct list_head *head, struct work_struct **nextp)h](j)}(hstruct work_struct *workh](j&)}(hj)h]hstruct}(hjNhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjNubj8)}(h h]h }(hjNhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjNubh)}(hhh]jO)}(h work_structh]h work_struct}(hj&NhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj#Nubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetj(NmodnameN classnameNjojr)}ju]jx)}jkjMsbc.move_linked_worksasbuh1hhjNubj8)}(h h]h }(hjFNhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjNubj)}(hjah]h*}(hjTNhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjNubjO)}(hworkh]hwork}(hjaNhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjNubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjNubj)}(hstruct list_head *headh](j&)}(hj)h]hstruct}(hjzNhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjvNubj8)}(h h]h }(hjNhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjvNubh)}(hhh]jO)}(h list_headh]h list_head}(hjNhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjNubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjNmodnameN classnameNjojr)}ju]jBNc.move_linked_worksasbuh1hhjvNubj8)}(h h]h }(hjNhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjvNubj)}(hjah]h*}(hjNhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjvNubjO)}(hheadh]hhead}(hjNhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjvNubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjNubj)}(hstruct work_struct **nextph](j&)}(hj)h]hstruct}(hjNhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjNubj8)}(h h]h }(hjNhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjNubh)}(hhh]jO)}(h work_structh]h work_struct}(hjOhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjOubah}(h]h ]h"]h$]h&] 
refdomainjreftypejk reftargetj OmodnameN classnameNjojr)}ju]jBNc.move_linked_worksasbuh1hhjNubj8)}(h h]h }(hj&OhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjNubj)}(hjah]h*}(hj4OhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjNubj)}(hjah]h*}(hjAOhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjNubjO)}(hnextph]hnextp}(hjNOhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjNubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjNubeh}(h]h ]h"]h$]h&]jjuh1jhjMhhhjMhMqubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjMhhhjMhMqubah}(h]jMah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjMhMqhjMhhubj{)}(hhh]h)}(hmove linked works to a listh]hmove linked works to a list}(hjxOhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMdhjuOhhubah}(h]h ]h"]h$]h&]uh1jzhjMhhhjMhMqubeh}(h]h ](jfunctioneh"]h$]h&]jjjjOjjOjjjuh1jhhhjhNhNubj)}(hX **Parameters** ``struct work_struct *work`` start of series of works to be scheduled ``struct list_head *head`` target list to append **work** to ``struct work_struct **nextp`` out parameter for nested worklist walking **Description** Schedule linked works starting from **work** to **head**. Work series to be scheduled starts at **work** and includes any consecutive work with WORK_STRUCT_LINKED set in its predecessor. See assign_work() for details on **nextp**. **Context** raw_spin_lock_irq(pool->lock).h](h)}(h**Parameters**h]j)}(hjOh]h Parameters}(hjOhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjOubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhhjOubjS)}(hhh](jX)}(hF``struct work_struct *work`` start of series of works to be scheduled h](j^)}(h``struct work_struct *work``h]j)}(hjOh]hstruct work_struct *work}(hjOhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjOubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMehjOubjw)}(hhh]h)}(h(start of series of works to be scheduledh]h(start of series of works to be scheduled}(hjOhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjOhMehjOubah}(h]h ]h"]h$]h&]uh1jvhjOubeh}(h]h ]h"]h$]h&]uh1jWhjOhMehjOubjX)}(h=``struct list_head *head`` target list to append **work** to h](j^)}(h``struct list_head *head``h]j)}(hjOh]hstruct list_head *head}(hjOhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjOubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMfhjOubjw)}(hhh]h)}(h!target list to append **work** toh](htarget list to append }(hj PhhhNhNubj)}(h**work**h]hwork}(hjPhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj Pubh to}(hj PhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhjPhMfhjPubah}(h]h ]h"]h$]h&]uh1jvhjOubeh}(h]h ]h"]h$]h&]uh1jWhjPhMfhjOubjX)}(hI``struct work_struct **nextp`` out parameter for nested worklist walking h](j^)}(h``struct work_struct **nextp``h]j)}(hj=Ph]hstruct work_struct **nextp}(hj?PhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj;Pubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMghj7Pubjw)}(hhh]h)}(h)out parameter for nested worklist walkingh]h)out parameter for nested worklist walking}(hjVPhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjRPhMghjSPubah}(h]h ]h"]h$]h&]uh1jvhj7Pubeh}(h]h ]h"]h$]h&]uh1jWhjRPhMghjOubeh}(h]h ]h"]h$]h&]uh1jRhjOubh)}(h**Description**h]j)}(hjxPh]h Description}(hjzPhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjvPubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMihjOubh)}(hSchedule linked works starting from **work** to **head**. Work series to be scheduled starts at **work** and includes any consecutive work with WORK_STRUCT_LINKED set in its predecessor. 
See assign_work() for details on **nextp**.h](h$Schedule linked works starting from }(hjPhhhNhNubj)}(h**work**h]hwork}(hjPhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjPubh to }(hjPhhhNhNubj)}(h**head**h]hhead}(hjPhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjPubh(. Work series to be scheduled starts at }(hjPhhhNhNubj)}(h**work**h]hwork}(hjPhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjPubht and includes any consecutive work with WORK_STRUCT_LINKED set in its predecessor. See assign_work() for details on }(hjPhhhNhNubj)}(h **nextp**h]hnextp}(hjPhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjPubh.}(hjPhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMihjOubh)}(h **Context**h]j)}(hjPh]hContext}(hjPhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjPubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMnhjOubh)}(hraw_spin_lock_irq(pool->lock).h]hraw_spin_lock_irq(pool->lock).}(hjPhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMnhjOubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jassign_work (C function) c.assign_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(h^bool assign_work (struct work_struct *work, struct worker *worker, struct work_struct **nextp)h]j )}(h]bool assign_work(struct work_struct *work, struct worker *worker, struct work_struct **nextp)h](j)}(hj7&h]hbool}(hj,QhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj(QhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hj:QhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj(Qhhhj9QhMubjI)}(h assign_workh]jO)}(h assign_workh]h assign_work}(hjLQhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjHQubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhj(Qhhhj9QhMubj)}(hM(struct work_struct *work, struct worker *worker, struct work_struct **nextp)h](j)}(hstruct work_struct *workh](j&)}(hj)h]hstruct}(hjhQhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjdQubj8)}(h h]h }(hjuQhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjdQubh)}(hhh]jO)}(h work_structh]h work_struct}(hjQhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjQubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjQmodnameN classnameNjojr)}ju]jx)}jkjNQsb c.assign_workasbuh1hhjdQubj8)}(h h]h }(hjQhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjdQubj)}(hjah]h*}(hjQhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjdQubjO)}(hworkh]hwork}(hjQhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjdQubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj`Qubj)}(hstruct worker *workerh](j&)}(hj)h]hstruct}(hjQhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjQubj8)}(h h]h }(hjQhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjQubh)}(hhh]jO)}(hworkerh]hworker}(hjQhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjQubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjQmodnameN classnameNjojr)}ju]jQ c.assign_workasbuh1hhjQubj8)}(h h]h }(hjRhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjQubj)}(hjah]h*}(hj$RhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjQubjO)}(hworkerh]hworker}(hj1RhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjQubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj`Qubj)}(hstruct work_struct **nextph](j&)}(hj)h]hstruct}(hjJRhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjFRubj8)}(h h]h }(hjWRhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjFRubh)}(hhh]jO)}(h work_structh]h work_struct}(hjhRhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjeRubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjjRmodnameN classnameNjojr)}ju]jQ c.assign_workasbuh1hhjFRubj8)}(h h]h }(hjRhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjFRubj)}(hjah]h*}(hjRhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjFRubj)}(hjah]h*}(hjRhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjFRubjO)}(hnextph]hnextp}(hjRhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjFRubeh}(h]h 
]h"]h$]h&]noemphjjuh1jhj`Qubeh}(h]h ]h"]h$]h&]jjuh1jhj(Qhhhj9QhMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhj$Qhhhj9QhMubah}(h]jQah ](jrjseh"]h$]h&]jwjx)jyhuh1jhj9QhMhj!Qhhubj{)}(hhh]h)}(h8assign a work item and its linked work items to a workerh]h8assign a work item and its linked work items to a worker}(hjRhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjRhhubah}(h]h ]h"]h$]h&]uh1jzhj!Qhhhj9QhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjRjjRjjjuh1jhhhjhNhNubj)}(hX**Parameters** ``struct work_struct *work`` work to assign ``struct worker *worker`` worker to assign to ``struct work_struct **nextp`` out parameter for nested worklist walking **Description** Assign **work** and its linked work items to **worker**. If **work** is already being executed by another worker in the same pool, it'll be punted there. If **nextp** is not NULL, it's updated to point to the next work of the last scheduled work. This allows assign_work() to be nested inside list_for_each_entry_safe(). Returns ``true`` if **work** was successfully assigned to **worker**. ``false`` if **work** was punted to another worker already executing it.h](h)}(h**Parameters**h]j)}(hjRh]h Parameters}(hjRhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjRubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjRubjS)}(hhh](jX)}(h,``struct work_struct *work`` work to assign h](j^)}(h``struct work_struct *work``h]j)}(hjSh]hstruct work_struct *work}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1jhjSubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjSubjw)}(hhh]h)}(hwork to assignh]hwork to assign}(hj2ShhhNhNubah}(h]h ]h"]h$]h&]uh1hhj.ShMhj/Subah}(h]h ]h"]h$]h&]uh1jvhjSubeh}(h]h ]h"]h$]h&]uh1jWhj.ShMhjSubjX)}(h.``struct worker *worker`` worker to assign to h](j^)}(h``struct worker *worker``h]j)}(hjRSh]hstruct worker *worker}(hjTShhhNhNubah}(h]h ]h"]h$]h&]uh1jhjPSubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjLSubjw)}(hhh]h)}(hworker to assign toh]hworker to assign to}(hjkShhhNhNubah}(h]h ]h"]h$]h&]uh1hhjgShMhjhSubah}(h]h ]h"]h$]h&]uh1jvhjLSubeh}(h]h ]h"]h$]h&]uh1jWhjgShMhjSubjX)}(hI``struct work_struct **nextp`` out parameter for nested worklist walking h](j^)}(h``struct work_struct **nextp``h]j)}(hjSh]hstruct work_struct **nextp}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1jhjSubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjSubjw)}(hhh]h)}(h)out parameter for nested worklist walkingh]h)out parameter for nested worklist walking}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1hhjShMhjSubah}(h]h ]h"]h$]h&]uh1jvhjSubeh}(h]h ]h"]h$]h&]uh1jWhjShMhjSubeh}(h]h ]h"]h$]h&]uh1jRhjRubh)}(h**Description**h]j)}(hjSh]h Description}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1jhjSubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjRubh)}(hAssign **work** and its linked work items to **worker**. If **work** is already being executed by another worker in the same pool, it'll be punted there.h](hAssign }(hjShhhNhNubj)}(h**work**h]hwork}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1jhjSubh and its linked work items to }(hjShhhNhNubj)}(h **worker**h]hworker}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1jhjSubh. 
If }(hjShhhNhNubj)}(h**work**h]hwork}(hjThhhNhNubah}(h]h ]h"]h$]h&]uh1jhjSubhW is already being executed by another worker in the same pool, it’ll be punted there.}(hjShhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjRubh)}(hIf **nextp** is not NULL, it's updated to point to the next work of the last scheduled work. This allows assign_work() to be nested inside list_for_each_entry_safe().h](hIf }(hj!ThhhNhNubj)}(h **nextp**h]hnextp}(hj)ThhhNhNubah}(h]h ]h"]h$]h&]uh1jhj!Tubh is not NULL, it’s updated to point to the next work of the last scheduled work. This allows assign_work() to be nested inside list_for_each_entry_safe().}(hj!ThhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjRubh)}(hReturns ``true`` if **work** was successfully assigned to **worker**. ``false`` if **work** was punted to another worker already executing it.h](hReturns }(hjBThhhNhNubj)}(h``true``h]htrue}(hjJThhhNhNubah}(h]h ]h"]h$]h&]uh1jhjBTubh if }(hjBThhhNhNubj)}(h**work**h]hwork}(hj\ThhhNhNubah}(h]h ]h"]h$]h&]uh1jhjBTubh was successfully assigned to }(hjBThhhNhNubj)}(h **worker**h]hworker}(hjnThhhNhNubah}(h]h ]h"]h$]h&]uh1jhjBTubh. }(hjBThhhNhNubj)}(h ``false``h]hfalse}(hjThhhNhNubah}(h]h ]h"]h$]h&]uh1jhjBTubh if }hjBTsbj)}(h**work**h]hwork}(hjThhhNhNubah}(h]h ]h"]h$]h&]uh1jhjBTubh3 was punted to another worker already executing it.}(hjBThhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjRubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jkick_pool (C function) c.kick_poolhNtauh1jhjhhhNhNubj)}(hhh](j)}(h)bool kick_pool (struct worker_pool *pool)h]j )}(h(bool kick_pool(struct worker_pool *pool)h](j)}(hj7&h]hbool}(hjThhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjThhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hjThhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjThhhjThMubjI)}(h kick_poolh]jO)}(h kick_poolh]h kick_pool}(hjThhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjTubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjThhhjThMubj)}(h(struct worker_pool *pool)h]j)}(hstruct worker_pool *poolh](j&)}(hj)h]hstruct}(hjUhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjUubj8)}(h h]h }(hjUhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjUubh)}(hhh]jO)}(h worker_poolh]h worker_pool}(hj%UhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj"Uubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetj'UmodnameN classnameNjojr)}ju]jx)}jkjTsb c.kick_poolasbuh1hhjUubj8)}(h h]h }(hjEUhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjUubj)}(hjah]h*}(hjSUhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjUubjO)}(hpoolh]hpool}(hj`UhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjUubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjTubah}(h]h ]h"]h$]h&]jjuh1jhjThhhjThMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjThhhjThMubah}(h]jTah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjThMhjThhubj{)}(hhh]h)}(h#wake up an idle worker if necessaryh]h#wake up an idle worker if necessary}(hjUhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjUhhubah}(h]h ]h"]h$]h&]uh1jzhjThhhjThMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjUjjUjjjuh1jhhhjhNhNubj)}(h**Parameters** ``struct worker_pool *pool`` pool to kick **Description** **pool** may have pending work items. Wake up worker if necessary. 
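The **nextp** convention mentioned above can be sketched as follows: when a worklist is walked with list_for_each_entry_safe(), assign_work() may consume several linked items at once, so the walk cursor is resynchronised through the out parameter. The surrounding function is a hypothetical caller shown only to illustrate the pattern; it assumes pool->lock is held::

	static void hand_out_works(struct worker_pool *pool, struct worker *worker)
	{
		struct work_struct *work, *next;

		lockdep_assert_held(&pool->lock);

		list_for_each_entry_safe(work, next, &pool->worklist, entry) {
			/*
			 * On success, @next has been advanced past any
			 * WORK_STRUCT_LINKED items taken together with @work.
			 */
			assign_work(work, worker, &next);
		}
	}
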
Returns whether a worker was woken up.h](h)}(h**Parameters**h]j)}(hjUh]h Parameters}(hjUhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjUubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjUubjS)}(hhh]jX)}(h*``struct worker_pool *pool`` pool to kick h](j^)}(h``struct worker_pool *pool``h]j)}(hjUh]hstruct worker_pool *pool}(hjUhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjUubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjUubjw)}(hhh]h)}(h pool to kickh]h pool to kick}(hjUhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjUhMhjUubah}(h]h ]h"]h$]h&]uh1jvhjUubeh}(h]h ]h"]h$]h&]uh1jWhjUhMhjUubah}(h]h ]h"]h$]h&]uh1jRhjUubh)}(h**Description**h]j)}(hjVh]h Description}(hjVhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjVubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjUubh)}(hi**pool** may have pending work items. Wake up worker if necessary. Returns whether a worker was woken up.h](j)}(h**pool**h]hpool}(hj VhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjVubha may have pending work items. Wake up worker if necessary. Returns whether a worker was woken up.}(hjVhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjUubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jwq_worker_running (C function)c.wq_worker_runninghNtauh1jhjhhhNhNubj)}(hhh](j)}(h1void wq_worker_running (struct task_struct *task)h]j )}(h0void wq_worker_running(struct task_struct *task)h](j)}(hvoidh]hvoid}(hjYVhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjUVhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMiubj8)}(h h]h }(hjhVhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjUVhhhjgVhMiubjI)}(hwq_worker_runningh]jO)}(hwq_worker_runningh]hwq_worker_running}(hjzVhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjvVubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjUVhhhjgVhMiubj)}(h(struct task_struct *task)h]j)}(hstruct task_struct *taskh](j&)}(hj)h]hstruct}(hjVhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjVubj8)}(h h]h }(hjVhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjVubh)}(hhh]jO)}(h task_structh]h task_struct}(hjVhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjVubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjVmodnameN classnameNjojr)}ju]jx)}jkj|Vsbc.wq_worker_runningasbuh1hhjVubj8)}(h h]h }(hjVhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjVubj)}(hjah]h*}(hjVhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjVubjO)}(htaskh]htask}(hjVhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjVubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjVubah}(h]h ]h"]h$]h&]jjuh1jhjUVhhhjgVhMiubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjQVhhhjgVhMiubah}(h]jLVah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjgVhMihjNVhhubj{)}(hhh]h)}(ha worker is running againh]ha worker is running again}(hjWhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMdhjWhhubah}(h]h ]h"]h$]h&]uh1jzhjNVhhhjgVhMiubeh}(h]h ](jfunctioneh"]h$]h&]jjjj1Wjj1Wjjjuh1jhhhjhNhNubj)}(h**Parameters** ``struct task_struct *task`` task waking up **Description** This function is called when a worker returns from schedule()h](h)}(h**Parameters**h]j)}(hj;Wh]h Parameters}(hj=WhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj9Wubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhhj5WubjS)}(hhh]jX)}(h,``struct task_struct *task`` task waking up h](j^)}(h``struct task_struct *task``h]j)}(hjZWh]hstruct task_struct *task}(hj\WhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjXWubah}(h]h 
]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMehjTWubjw)}(hhh]h)}(htask waking uph]htask waking up}(hjsWhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjoWhMehjpWubah}(h]h ]h"]h$]h&]uh1jvhjTWubeh}(h]h ]h"]h$]h&]uh1jWhjoWhMehjQWubah}(h]h ]h"]h$]h&]uh1jRhj5Wubh)}(h**Description**h]j)}(hjWh]h Description}(hjWhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjWubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMghj5Wubh)}(h=This function is called when a worker returns from schedule()h]h=This function is called when a worker returns from schedule()}(hjWhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMghj5Wubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jwq_worker_sleeping (C function)c.wq_worker_sleepinghNtauh1jhjhhhNhNubj)}(hhh](j)}(h2void wq_worker_sleeping (struct task_struct *task)h]j )}(h1void wq_worker_sleeping(struct task_struct *task)h](j)}(hvoidh]hvoid}(hjWhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjWhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hjWhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjWhhhjWhMubjI)}(hwq_worker_sleepingh]jO)}(hwq_worker_sleepingh]hwq_worker_sleeping}(hjWhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjWubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjWhhhjWhMubj)}(h(struct task_struct *task)h]j)}(hstruct task_struct *taskh](j&)}(hj)h]hstruct}(hjXhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjXubj8)}(h h]h }(hj$XhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjXubh)}(hhh]jO)}(h task_structh]h task_struct}(hj5XhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj2Xubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetj7XmodnameN classnameNjojr)}ju]jx)}jkjWsbc.wq_worker_sleepingasbuh1hhjXubj8)}(h h]h }(hjUXhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjXubj)}(hjah]h*}(hjcXhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjXubjO)}(htaskh]htask}(hjpXhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjXubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjXubah}(h]h ]h"]h$]h&]jjuh1jhjWhhhjWhMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjWhhhjWhMubah}(h]jWah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjWhMhjWhhubj{)}(hhh]h)}(ha worker is going to sleeph]ha worker is going to sleep}(hjXhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjXhhubah}(h]h ]h"]h$]h&]uh1jzhjWhhhjWhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjXjjXjjjuh1jhhhjhNhNubj)}(h**Parameters** ``struct task_struct *task`` task going to sleep **Description** This function is called from schedule() when a busy worker is going to sleep.h](h)}(h**Parameters**h]j)}(hjXh]h Parameters}(hjXhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjXubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjXubjS)}(hhh]jX)}(h1``struct task_struct *task`` task going to sleep h](j^)}(h``struct task_struct *task``h]j)}(hjXh]hstruct task_struct *task}(hjXhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjXubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjXubjw)}(hhh]h)}(htask going to sleeph]htask going to sleep}(hjXhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjXhMhjXubah}(h]h ]h"]h$]h&]uh1jvhjXubeh}(h]h ]h"]h$]h&]uh1jWhjXhMhjXubah}(h]h ]h"]h$]h&]uh1jRhjXubh)}(h**Description**h]j)}(hjYh]h Description}(hjYhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjYubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjXubh)}(hMThis function is called from schedule() when a busy worker is going 
to sleep.h]hMThis function is called from schedule() when a busy worker is going to sleep.}(hj,YhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjXubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jwq_worker_tick (C function)c.wq_worker_tickhNtauh1jhjhhhNhNubj)}(hhh](j)}(h.void wq_worker_tick (struct task_struct *task)h]j )}(h-void wq_worker_tick(struct task_struct *task)h](j)}(hvoidh]hvoid}(hj[YhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjWYhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hjjYhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjWYhhhjiYhMubjI)}(hwq_worker_tickh]jO)}(hwq_worker_tickh]hwq_worker_tick}(hj|YhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjxYubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjWYhhhjiYhMubj)}(h(struct task_struct *task)h]j)}(hstruct task_struct *taskh](j&)}(hj)h]hstruct}(hjYhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjYubj8)}(h h]h }(hjYhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjYubh)}(hhh]jO)}(h task_structh]h task_struct}(hjYhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjYubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjYmodnameN classnameNjojr)}ju]jx)}jkj~Ysbc.wq_worker_tickasbuh1hhjYubj8)}(h h]h }(hjYhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjYubj)}(hjah]h*}(hjYhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjYubjO)}(htaskh]htask}(hjYhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjYubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjYubah}(h]h ]h"]h$]h&]jjuh1jhjWYhhhjiYhMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjSYhhhjiYhMubah}(h]jNYah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjiYhMhjPYhhubj{)}(hhh]h)}(h4a scheduler tick occurred while a kworker is runningh]h4a scheduler tick occurred while a kworker is running}(hjZhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjZhhubah}(h]h ]h"]h$]h&]uh1jzhjPYhhhjiYhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjj3Zjj3Zjjjuh1jhhhjhNhNubj)}(h**Parameters** ``struct task_struct *task`` task currently running **Description** Called from sched_tick(). We're in the IRQ context and the current worker's fields which follow the 'K' locking rule can be accessed safely.h](h)}(h**Parameters**h]j)}(hj=Zh]h Parameters}(hj?ZhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj;Zubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj7ZubjS)}(hhh]jX)}(h4``struct task_struct *task`` task currently running h](j^)}(h``struct task_struct *task``h]j)}(hj\Zh]hstruct task_struct *task}(hj^ZhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjZZubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjVZubjw)}(hhh]h)}(htask currently runningh]htask currently running}(hjuZhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjqZhMhjrZubah}(h]h ]h"]h$]h&]uh1jvhjVZubeh}(h]h ]h"]h$]h&]uh1jWhjqZhMhjSZubah}(h]h ]h"]h$]h&]uh1jRhj7Zubh)}(h**Description**h]j)}(hjZh]h Description}(hjZhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjZubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj7Zubh)}(hCalled from sched_tick(). We're in the IRQ context and the current worker's fields which follow the 'K' locking rule can be accessed safely.h]hCalled from sched_tick(). 
We’re in the IRQ context and the current worker’s fields which follow the ‘K’ locking rule can be accessed safely.}(hjZhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj7Zubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j wq_worker_last_func (C function)c.wq_worker_last_funchNtauh1jhjhhhNhNubj)}(hhh](j)}(h:work_func_t wq_worker_last_func (struct task_struct *task)h]j )}(h9work_func_t wq_worker_last_func(struct task_struct *task)h](h)}(hhh]jO)}(h work_func_th]h work_func_t}(hjZhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjZubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjZmodnameN classnameNjojr)}ju]jx)}jkwq_worker_last_funcsbc.wq_worker_last_funcasbuh1hhjZhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hj[hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjZhhhj[hMubjI)}(hwq_worker_last_funch]jO)}(hjZh]hwq_worker_last_func}(hj[hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj[ubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjZhhhj[hMubj)}(h(struct task_struct *task)h]j)}(hstruct task_struct *taskh](j&)}(hj)h]hstruct}(hj.[hhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hj*[ubj8)}(h h]h }(hj;[hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj*[ubh)}(hhh]jO)}(h task_structh]h task_struct}(hjL[hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjI[ubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjN[modnameN classnameNjojr)}ju]jZc.wq_worker_last_funcasbuh1hhj*[ubj8)}(h h]h }(hjj[hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj*[ubj)}(hjah]h*}(hjx[hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj*[ubjO)}(htaskh]htask}(hj[hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj*[ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj&[ubah}(h]h ]h"]h$]h&]jjuh1jhjZhhhj[hMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjZhhhj[hMubah}(h]jZah ](jrjseh"]h$]h&]jwjx)jyhuh1jhj[hMhjZhhubj{)}(hhh]h)}(h$retrieve worker's last work functionh]h&retrieve worker’s last work function}(hj[hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj[hhubah}(h]h ]h"]h$]h&]uh1jzhjZhhhj[hMubeh}(h]h ](jfunctioneh"]h$]h&]jjjj[jj[jjjuh1jhhhjhNhNubj)}(hXw**Parameters** ``struct task_struct *task`` Task to retrieve last work function of. **Description** Determine the last function a worker executed. This is called from the scheduler to get a worker's last known identity. This function is called during schedule() when a kworker is going to sleep. It's used by psi to identify aggregation workers during dequeuing, to allow periodic aggregation to shut-off when that worker is the last task in the system or cgroup to go to sleep. As this function doesn't involve any workqueue-related locking, it only returns stable values when called from inside the scheduler's queuing and dequeuing paths, when **task**, which must be a kworker, is guaranteed to not be processing any works. **Context** raw_spin_lock_irq(rq->lock) **Return** The last work function ``current`` executed as a worker, NULL if it hasn't executed any work yet.h](h)}(h**Parameters**h]j)}(hj[h]h Parameters}(hj[hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj[ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj[ubjS)}(hhh]jX)}(hE``struct task_struct *task`` Task to retrieve last work function of. 
h](j^)}(h``struct task_struct *task``h]j)}(hj[h]hstruct task_struct *task}(hj[hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj[ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj[ubjw)}(hhh]h)}(h'Task to retrieve last work function of.h]h'Task to retrieve last work function of.}(hj \hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj\hMhj\ubah}(h]h ]h"]h$]h&]uh1jvhj[ubeh}(h]h ]h"]h$]h&]uh1jWhj\hMhj[ubah}(h]h ]h"]h$]h&]uh1jRhj[ubh)}(h**Description**h]j)}(hj+\h]h Description}(hj-\hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj)\ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj[ubh)}(hwDetermine the last function a worker executed. This is called from the scheduler to get a worker's last known identity.h]hyDetermine the last function a worker executed. This is called from the scheduler to get a worker’s last known identity.}(hjA\hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj[ubh)}(hXThis function is called during schedule() when a kworker is going to sleep. It's used by psi to identify aggregation workers during dequeuing, to allow periodic aggregation to shut-off when that worker is the last task in the system or cgroup to go to sleep.h]hXThis function is called during schedule() when a kworker is going to sleep. It’s used by psi to identify aggregation workers during dequeuing, to allow periodic aggregation to shut-off when that worker is the last task in the system or cgroup to go to sleep.}(hjP\hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj[ubh)}(hAs this function doesn't involve any workqueue-related locking, it only returns stable values when called from inside the scheduler's queuing and dequeuing paths, when **task**, which must be a kworker, is guaranteed to not be processing any works.h](hAs this function doesn’t involve any workqueue-related locking, it only returns stable values when called from inside the scheduler’s queuing and dequeuing paths, when }(hj_\hhhNhNubj)}(h**task**h]htask}(hjg\hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj_\ubhH, which must be a kworker, is guaranteed to not be processing any works.}(hj_\hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj[ubh)}(h **Context**h]j)}(hj\h]hContext}(hj\hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj\ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj[ubh)}(hraw_spin_lock_irq(rq->lock)h]hraw_spin_lock_irq(rq->lock)}(hj\hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj[ubh)}(h **Return**h]j)}(hj\h]hReturn}(hj\hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj\ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj[ubh)}(haThe last work function ``current`` executed as a worker, NULL if it hasn't executed any work yet.h](hThe last work function }(hj\hhhNhNubj)}(h ``current``h]hcurrent}(hj\hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj\ubhA executed as a worker, NULL if it hasn’t executed any work yet.}(hj\hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj[ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jwq_node_nr_active (C 
function)c.wq_node_nr_activehNtauh1jhjhhhNhNubj)}(hhh](j)}(hTstruct wq_node_nr_active * wq_node_nr_active (struct workqueue_struct *wq, int node)h]j )}(hRstruct wq_node_nr_active *wq_node_nr_active(struct workqueue_struct *wq, int node)h](j&)}(hj)h]hstruct}(hj]hhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hj\hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hj]hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj\hhhj ]hMubh)}(hhh]jO)}(hwq_node_nr_activeh]hwq_node_nr_active}(hj]hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj]ubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetj!]modnameN classnameNjojr)}ju]jx)}jkwq_node_nr_activesbc.wq_node_nr_activeasbuh1hhj\hhhj ]hMubj8)}(h h]h }(hj@]hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj\hhhj ]hMubj)}(hjah]h*}(hjN]hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj\hhhj ]hMubjI)}(hwq_node_nr_activeh]jO)}(hj=]h]hwq_node_nr_active}(hj_]hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj[]ubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhj\hhhj ]hMubj)}(h'(struct workqueue_struct *wq, int node)h](j)}(hstruct workqueue_struct *wqh](j&)}(hj)h]hstruct}(hjz]hhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjv]ubj8)}(h h]h }(hj]hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjv]ubh)}(hhh]jO)}(hworkqueue_structh]hworkqueue_struct}(hj]hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj]ubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetj]modnameN classnameNjojr)}ju]j;]c.wq_node_nr_activeasbuh1hhjv]ubj8)}(h h]h }(hj]hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjv]ubj)}(hjah]h*}(hj]hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjv]ubjO)}(hwqh]hwq}(hj]hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjv]ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjr]ubj)}(hint nodeh](j)}(hinth]hint}(hj]hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj]ubj8)}(h h]h }(hj]hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj]ubjO)}(hnodeh]hnode}(hj^hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj]ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjr]ubeh}(h]h ]h"]h$]h&]jjuh1jhj\hhhj ]hMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhj\hhhj ]hMubah}(h]j\ah ](jrjseh"]h$]h&]jwjx)jyhuh1jhj ]hMhj\hhubj{)}(hhh]h)}(h"Determine wq_node_nr_active to useh]h"Determine wq_node_nr_active to use}(hj0^hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj-^hhubah}(h]h ]h"]h$]h&]uh1jzhj\hhhj ]hMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjH^jjH^jjjuh1jhhhjhNhNubj)}(hX**Parameters** ``struct workqueue_struct *wq`` workqueue of interest ``int node`` NUMA node, can be ``NUMA_NO_NODE`` **Description** Determine wq_node_nr_active to use for **wq** on **node**. Returns: - ``NULL`` for per-cpu workqueues as they don't need to use shared nr_active. - node_nr_active[nr_node_ids] if **node** is ``NUMA_NO_NODE``. 
- Otherwise, node_nr_active[**node**].h](h)}(h**Parameters**h]j)}(hjR^h]h Parameters}(hjT^hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjP^ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjL^ubjS)}(hhh](jX)}(h6``struct workqueue_struct *wq`` workqueue of interest h](j^)}(h``struct workqueue_struct *wq``h]j)}(hjq^h]hstruct workqueue_struct *wq}(hjs^hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjo^ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjk^ubjw)}(hhh]h)}(hworkqueue of interesth]hworkqueue of interest}(hj^hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj^hMhj^ubah}(h]h ]h"]h$]h&]uh1jvhjk^ubeh}(h]h ]h"]h$]h&]uh1jWhj^hMhjh^ubjX)}(h0``int node`` NUMA node, can be ``NUMA_NO_NODE`` h](j^)}(h ``int node``h]j)}(hj^h]hint node}(hj^hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj^ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj^ubjw)}(hhh]h)}(h"NUMA node, can be ``NUMA_NO_NODE``h](hNUMA node, can be }(hj^hhhNhNubj)}(h``NUMA_NO_NODE``h]h NUMA_NO_NODE}(hj^hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj^ubeh}(h]h ]h"]h$]h&]uh1hhj^hMhj^ubah}(h]h ]h"]h$]h&]uh1jvhj^ubeh}(h]h ]h"]h$]h&]uh1jWhj^hMhjh^ubeh}(h]h ]h"]h$]h&]uh1jRhjL^ubh)}(h**Description**h]j)}(hj^h]h Description}(hj^hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj^ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjL^ubh)}(hCDetermine wq_node_nr_active to use for **wq** on **node**. Returns:h](h'Determine wq_node_nr_active to use for }(hj _hhhNhNubj)}(h**wq**h]hwq}(hj_hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj _ubh on }(hj _hhhNhNubj)}(h**node**h]hnode}(hj#_hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj _ubh . Returns:}(hj _hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjL^ubj )}(hhh](j)}(hL``NULL`` for per-cpu workqueues as they don't need to use shared nr_active. h]h)}(hK``NULL`` for per-cpu workqueues as they don't need to use shared nr_active.h](j)}(h``NULL``h]hNULL}(hjG_hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjC_ubhE for per-cpu workqueues as they don’t need to use shared nr_active.}(hjC_hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hj?_ubah}(h]h ]h"]h$]h&]uh1jhj<_ubj)}(h=node_nr_active[nr_node_ids] if **node** is ``NUMA_NO_NODE``. 
h]h)}(h`hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj-`ubh)}(hhh]jO)}(hworkqueue_structh]hworkqueue_struct}(hjO`hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjL`ubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjQ`modnameN classnameNjojr)}ju]jx)}jkj`sbc.wq_update_node_max_activeasbuh1hhj-`ubj8)}(h h]h }(hjo`hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj-`ubj)}(hjah]h*}(hj}`hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj-`ubjO)}(hwqh]hwq}(hj`hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj-`ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj)`ubj)}(h int off_cpuh](j)}(hinth]hint}(hj`hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj`ubj8)}(h h]h }(hj`hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj`ubjO)}(hoff_cpuh]hoff_cpu}(hj`hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj`ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj)`ubeh}(h]h ]h"]h$]h&]jjuh1jhj_hhhj`hM%ubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhj_hhhj`hM%ubah}(h]j_ah ](jrjseh"]h$]h&]jwjx)jyhuh1jhj`hM%hj_hhubj{)}(hhh]h)}(h"Update per-node max_actives to useh]h"Update per-node max_actives to use}(hj`hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj`hhubah}(h]h ]h"]h$]h&]uh1jzhj_hhhj`hM%ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjajjajjjuh1jhhhjhNhNubj)}(hX{**Parameters** ``struct workqueue_struct *wq`` workqueue to update ``int off_cpu`` CPU that's going down, -1 if a CPU is not going down **Description** Update **wq->node_nr_active**[]->max. **wq** must be unbound. max_active is distributed among nodes according to the proportions of numbers of online cpus. The result is always between **wq->min_active** and max_active.h](h)}(h**Parameters**h]j)}(hj ah]h Parameters}(hj ahhhNhNubah}(h]h ]h"]h$]h&]uh1jhj aubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM!hjaubjS)}(hhh](jX)}(h4``struct workqueue_struct *wq`` workqueue to update h](j^)}(h``struct workqueue_struct *wq``h]j)}(hj*ah]hstruct workqueue_struct *wq}(hj,ahhhNhNubah}(h]h ]h"]h$]h&]uh1jhj(aubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj$aubjw)}(hhh]h)}(hworkqueue to updateh]hworkqueue to update}(hjCahhhNhNubah}(h]h ]h"]h$]h&]uh1hhj?ahMhj@aubah}(h]h ]h"]h$]h&]uh1jvhj$aubeh}(h]h ]h"]h$]h&]uh1jWhj?ahMhj!aubjX)}(hE``int off_cpu`` CPU that's going down, -1 if a CPU is not going down h](j^)}(h``int off_cpu``h]j)}(hjcah]h int off_cpu}(hjeahhhNhNubah}(h]h ]h"]h$]h&]uh1jhjaaubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj]aubjw)}(hhh]h)}(h4CPU that's going down, -1 if a CPU is not going downh]h6CPU that’s going down, -1 if a CPU is not going down}(hj|ahhhNhNubah}(h]h ]h"]h$]h&]uh1hhjxahMhjyaubah}(h]h ]h"]h$]h&]uh1jvhj]aubeh}(h]h ]h"]h$]h&]uh1jWhjxahMhj!aubeh}(h]h ]h"]h$]h&]uh1jRhjaubh)}(h**Description**h]j)}(hjah]h Description}(hjahhhNhNubah}(h]h ]h"]h$]h&]uh1jhjaubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM!hjaubh)}(hUpdate **wq->node_nr_active**[]->max. **wq** must be unbound. max_active is distributed among nodes according to the proportions of numbers of online cpus. The result is always between **wq->min_active** and max_active.h](hUpdate }(hjahhhNhNubj)}(h%**wq->node_nr_active**[]->max. **wq**h]h!wq->node_nr_active**[]->max. **wq}(hjahhhNhNubah}(h]h ]h"]h$]h&]uh1jhjaubh must be unbound. max_active is distributed among nodes according to the proportions of numbers of online cpus. 
The result is always between }(hjahhhNhNubj)}(h**wq->min_active**h]hwq->min_active}(hjahhhNhNubah}(h]h ]h"]h$]h&]uh1jhjaubh and max_active.}(hjahhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM!hjaubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jget_pwq (C function) c.get_pwqhNtauh1jhjhhhNhNubj)}(hhh](j)}(h)void get_pwq (struct pool_workqueue *pwq)h]j )}(h(void get_pwq(struct pool_workqueue *pwq)h](j)}(hvoidh]hvoid}(hjbhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjbhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMWubj8)}(h h]h }(hjbhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjbhhhjbhMWubjI)}(hget_pwqh]jO)}(hget_pwqh]hget_pwq}(hj(bhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj$bubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjbhhhjbhMWubj)}(h(struct pool_workqueue *pwq)h]j)}(hstruct pool_workqueue *pwqh](j&)}(hj)h]hstruct}(hjDbhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hj@bubj8)}(h h]h }(hjQbhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj@bubh)}(hhh]jO)}(hpool_workqueueh]hpool_workqueue}(hjbbhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj_bubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjdbmodnameN classnameNjojr)}ju]jx)}jkj*bsb c.get_pwqasbuh1hhj@bubj8)}(h h]h }(hjbhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj@bubj)}(hjah]h*}(hjbhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj@bubjO)}(hpwqh]hpwq}(hjbhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj@bubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjlock.h](h)}(h**Parameters**h]j)}(hjbh]h Parameters}(hjbhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjbubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMUhjbubjS)}(hhh]jX)}(h5``struct pool_workqueue *pwq`` pool_workqueue to get h](j^)}(h``struct pool_workqueue *pwq``h]j)}(hjch]hstruct pool_workqueue *pwq}(hj chhhNhNubah}(h]h ]h"]h$]h&]uh1jhjcubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMRhjcubjw)}(hhh]h)}(hpool_workqueue to geth]hpool_workqueue to get}(hj!chhhNhNubah}(h]h ]h"]h$]h&]uh1hhjchMRhjcubah}(h]h ]h"]h$]h&]uh1jvhjcubeh}(h]h ]h"]h$]h&]uh1jWhjchMRhjbubah}(h]h ]h"]h$]h&]uh1jRhjbubh)}(h**Description**h]j)}(hjCch]h Description}(hjEchhhNhNubah}(h]h ]h"]h$]h&]uh1jhjAcubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMThjbubh)}(hObtain an extra reference on **pwq**. The caller should guarantee that **pwq** has positive refcnt and be holding the matching pool->lock.h](hObtain an extra reference on }(hjYchhhNhNubj)}(h**pwq**h]hpwq}(hjachhhNhNubah}(h]h ]h"]h$]h&]uh1jhjYcubh$. 
The caller should guarantee that }(hjYchhhNhNubj)}(h**pwq**h]hpwq}(hjschhhNhNubah}(h]h ]h"]h$]h&]uh1jhjYcubh< has positive refcnt and be holding the matching pool->lock.}(hjYchhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMThjbubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jput_pwq (C function) c.put_pwqhNtauh1jhjhhhNhNubj)}(hhh](j)}(h)void put_pwq (struct pool_workqueue *pwq)h]j )}(h(void put_pwq(struct pool_workqueue *pwq)h](j)}(hvoidh]hvoid}(hjchhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjchhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMeubj8)}(h h]h }(hjchhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjchhhjchMeubjI)}(hput_pwqh]jO)}(hput_pwqh]hput_pwq}(hjchhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjcubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjchhhjchMeubj)}(h(struct pool_workqueue *pwq)h]j)}(hstruct pool_workqueue *pwqh](j&)}(hj)h]hstruct}(hjchhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjcubj8)}(h h]h }(hjchhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjcubh)}(hhh]jO)}(hpool_workqueueh]hpool_workqueue}(hjdhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjdubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetj dmodnameN classnameNjojr)}ju]jx)}jkjcsb c.put_pwqasbuh1hhjcubj8)}(h h]h }(hj'dhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjcubj)}(hjah]h*}(hj5dhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjcubjO)}(hpwqh]hpwq}(hjBdhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjcubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjcubah}(h]h ]h"]h$]h&]jjuh1jhjchhhjchMeubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjchhhjchMeubah}(h]jcah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjchMehjchhubj{)}(hhh]h)}(hput a pool_workqueue referenceh]hput a pool_workqueue reference}(hjldhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM_hjidhhubah}(h]h ]h"]h$]h&]uh1jzhjchhhjchMeubeh}(h]h ](jfunctioneh"]h$]h&]jjjjdjjdjjjuh1jhhhjhNhNubj)}(h**Parameters** ``struct pool_workqueue *pwq`` pool_workqueue to put **Description** Drop a reference of **pwq**. If its refcnt reaches zero, schedule its destruction. The caller should be holding the matching pool->lock.h](h)}(h**Parameters**h]j)}(hjdh]h Parameters}(hjdhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjdubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMchjdubjS)}(hhh]jX)}(h5``struct pool_workqueue *pwq`` pool_workqueue to put h](j^)}(h``struct pool_workqueue *pwq``h]j)}(hjdh]hstruct pool_workqueue *pwq}(hjdhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjdubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM`hjdubjw)}(hhh]h)}(hpool_workqueue to puth]hpool_workqueue to put}(hjdhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjdhM`hjdubah}(h]h ]h"]h$]h&]uh1jvhjdubeh}(h]h ]h"]h$]h&]uh1jWhjdhM`hjdubah}(h]h ]h"]h$]h&]uh1jRhjdubh)}(h**Description**h]j)}(hjdh]h Description}(hjdhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjdubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMbhjdubh)}(hDrop a reference of **pwq**. If its refcnt reaches zero, schedule its destruction. The caller should be holding the matching pool->lock.h](hDrop a reference of }(hjdhhhNhNubj)}(h**pwq**h]hpwq}(hjehhhNhNubah}(h]h ]h"]h$]h&]uh1jhjdubho. If its refcnt reaches zero, schedule its destruction. 
The caller should be holding the matching pool->lock.}(hjdhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMbhjdubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jput_pwq_unlocked (C function)c.put_pwq_unlockedhNtauh1jhjhhhNhNubj)}(hhh](j)}(h2void put_pwq_unlocked (struct pool_workqueue *pwq)h]j )}(h1void put_pwq_unlocked(struct pool_workqueue *pwq)h](j)}(hvoidh]hvoid}(hj?ehhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj;ehhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMwubj8)}(h h]h }(hjNehhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj;ehhhjMehMwubjI)}(hput_pwq_unlockedh]jO)}(hput_pwq_unlockedh]hput_pwq_unlocked}(hj`ehhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj\eubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhj;ehhhjMehMwubj)}(h(struct pool_workqueue *pwq)h]j)}(hstruct pool_workqueue *pwqh](j&)}(hj)h]hstruct}(hj|ehhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjxeubj8)}(h h]h }(hjehhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjxeubh)}(hhh]jO)}(hpool_workqueueh]hpool_workqueue}(hjehhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjeubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjemodnameN classnameNjojr)}ju]jx)}jkjbesbc.put_pwq_unlockedasbuh1hhjxeubj8)}(h h]h }(hjehhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjxeubj)}(hjah]h*}(hjehhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjxeubjO)}(hpwqh]hpwq}(hjehhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjxeubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjteubah}(h]h ]h"]h$]h&]jjuh1jhj;ehhhjMehMwubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhj7ehhhjMehMwubah}(h]j2eah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjMehMwhj4ehhubj{)}(hhh]h)}(h+put_pwq() with surrounding pool lock/unlockh]h+put_pwq() with surrounding pool lock/unlock}(hjehhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMrhjehhubah}(h]h ]h"]h$]h&]uh1jzhj4ehhhjMehMwubeh}(h]h ](jfunctioneh"]h$]h&]jjjjfjjfjjjuh1jhhhjhNhNubj)}(h**Parameters** ``struct pool_workqueue *pwq`` pool_workqueue to put (can be ``NULL``) **Description** put_pwq() with locking. This function also allows ``NULL`` **pwq**.h](h)}(h**Parameters**h]j)}(hj!fh]h Parameters}(hj#fhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjfubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMvhjfubjS)}(hhh]jX)}(hG``struct pool_workqueue *pwq`` pool_workqueue to put (can be ``NULL``) h](j^)}(h``struct pool_workqueue *pwq``h]j)}(hj@fh]hstruct pool_workqueue *pwq}(hjBfhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj>fubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMshj:fubjw)}(hhh]h)}(h'pool_workqueue to put (can be ``NULL``)h](hpool_workqueue to put (can be }(hjYfhhhNhNubj)}(h``NULL``h]hNULL}(hjafhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjYfubh)}(hjYfhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhjUfhMshjVfubah}(h]h ]h"]h$]h&]uh1jvhj:fubeh}(h]h ]h"]h$]h&]uh1jWhjUfhMshj7fubah}(h]h ]h"]h$]h&]uh1jRhjfubh)}(h**Description**h]j)}(hjfh]h Description}(hjfhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjfubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMuhjfubh)}(hDput_pwq() with locking. This function also allows ``NULL`` **pwq**.h](h3put_pwq() with locking. 
This function also allows }(hjfhhhNhNubj)}(h``NULL``h]hNULL}(hjfhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjfubh }(hjfhhhNhNubj)}(h**pwq**h]hpwq}(hjfhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjfubh.}(hjfhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMuhjfubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j!pwq_tryinc_nr_active (C function)c.pwq_tryinc_nr_activehNtauh1jhjhhhNhNubj)}(hhh](j)}(hAbool pwq_tryinc_nr_active (struct pool_workqueue *pwq, bool fill)h]j )}(h@bool pwq_tryinc_nr_active(struct pool_workqueue *pwq, bool fill)h](j)}(hj7&h]hbool}(hjfhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjfhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hjghhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjfhhhjghMubjI)}(hpwq_tryinc_nr_activeh]jO)}(hpwq_tryinc_nr_activeh]hpwq_tryinc_nr_active}(hjghhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjgubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjfhhhjghMubj)}(h'(struct pool_workqueue *pwq, bool fill)h](j)}(hstruct pool_workqueue *pwqh](j&)}(hj)h]hstruct}(hj2ghhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hj.gubj8)}(h h]h }(hj?ghhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj.gubh)}(hhh]jO)}(hpool_workqueueh]hpool_workqueue}(hjPghhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjMgubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjRgmodnameN classnameNjojr)}ju]jx)}jkjgsbc.pwq_tryinc_nr_activeasbuh1hhj.gubj8)}(h h]h }(hjpghhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj.gubj)}(hjah]h*}(hj~ghhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj.gubjO)}(hpwqh]hpwq}(hjghhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj.gubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj*gubj)}(h bool fillh](j)}(hj7&h]hbool}(hjghhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjgubj8)}(h h]h }(hjghhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjgubjO)}(hfillh]hfill}(hjghhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjgubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj*gubeh}(h]h ]h"]h$]h&]jjuh1jhjfhhhjghMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjfhhhjghMubah}(h]jfah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjghMhjfhhubj{)}(hhh]h)}(h$Try to increment nr_active for a pwqh]h$Try to increment nr_active for a pwq}(hjghhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjghhubah}(h]h ]h"]h$]h&]uh1jzhjfhhhjghMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjhjjhjjjuh1jhhhjhNhNubj)}(hX-**Parameters** ``struct pool_workqueue *pwq`` pool_workqueue of interest ``bool fill`` max_active may have increased, try to increase concurrency level **Description** Try to increment nr_active for **pwq**. Returns ``true`` if an nr_active count is successfully obtained. 
``false`` otherwise.h](h)}(h**Parameters**h]j)}(hj hh]h Parameters}(hj hhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj hubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjhubjS)}(hhh](jX)}(h:``struct pool_workqueue *pwq`` pool_workqueue of interest h](j^)}(h``struct pool_workqueue *pwq``h]j)}(hj*hh]hstruct pool_workqueue *pwq}(hj,hhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj(hubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj$hubjw)}(hhh]h)}(hpool_workqueue of interesth]hpool_workqueue of interest}(hjChhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj?hhMhj@hubah}(h]h ]h"]h$]h&]uh1jvhj$hubeh}(h]h ]h"]h$]h&]uh1jWhj?hhMhj!hubjX)}(hO``bool fill`` max_active may have increased, try to increase concurrency level h](j^)}(h ``bool fill``h]j)}(hjchh]h bool fill}(hjehhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjahubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj]hubjw)}(hhh]h)}(h@max_active may have increased, try to increase concurrency levelh]h@max_active may have increased, try to increase concurrency level}(hj|hhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjxhhMhjyhubah}(h]h ]h"]h$]h&]uh1jvhj]hubeh}(h]h ]h"]h$]h&]uh1jWhjxhhMhj!hubeh}(h]h ]h"]h$]h&]uh1jRhjhubh)}(h**Description**h]j)}(hjhh]h Description}(hjhhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjhubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjhubh)}(h}Try to increment nr_active for **pwq**. Returns ``true`` if an nr_active count is successfully obtained. ``false`` otherwise.h](hTry to increment nr_active for }(hjhhhhNhNubj)}(h**pwq**h]hpwq}(hjhhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjhubh . Returns }(hjhhhhNhNubj)}(h``true``h]htrue}(hjhhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjhubh1 if an nr_active count is successfully obtained. 
}(hjhhhhNhNubj)}(h ``false``h]hfalse}(hjhhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjhubh otherwise.}(hjhhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjhubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j(pwq_activate_first_inactive (C function)c.pwq_activate_first_inactivehNtauh1jhjhhhNhNubj)}(hhh](j)}(hHbool pwq_activate_first_inactive (struct pool_workqueue *pwq, bool fill)h]j )}(hGbool pwq_activate_first_inactive(struct pool_workqueue *pwq, bool fill)h](j)}(hj7&h]hbool}(hjihhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjihhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hj'ihhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjihhhj&ihMubjI)}(hpwq_activate_first_inactiveh]jO)}(hpwq_activate_first_inactiveh]hpwq_activate_first_inactive}(hj9ihhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj5iubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjihhhj&ihMubj)}(h'(struct pool_workqueue *pwq, bool fill)h](j)}(hstruct pool_workqueue *pwqh](j&)}(hj)h]hstruct}(hjUihhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjQiubj8)}(h h]h }(hjbihhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjQiubh)}(hhh]jO)}(hpool_workqueueh]hpool_workqueue}(hjsihhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjpiubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjuimodnameN classnameNjojr)}ju]jx)}jkj;isbc.pwq_activate_first_inactiveasbuh1hhjQiubj8)}(h h]h }(hjihhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjQiubj)}(hjah]h*}(hjihhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjQiubjO)}(hpwqh]hpwq}(hjihhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjQiubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjMiubj)}(h bool fillh](j)}(hj7&h]hbool}(hjihhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjiubj8)}(h h]h }(hjihhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjiubjO)}(hfillh]hfill}(hjihhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjiubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjMiubeh}(h]h ]h"]h$]h&]jjuh1jhjihhhj&ihMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjihhhj&ihMubah}(h]j iah ](jrjseh"]h$]h&]jwjx)jyhuh1jhj&ihMhjihhubj{)}(hhh]h)}(h.Activate the first inactive work item on a pwqh]h.Activate the first inactive work item on a pwq}(hj jhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj jhhubah}(h]h ]h"]h$]h&]uh1jzhjihhhj&ihMubeh}(h]h ](jfunctioneh"]h$]h&]jjjj$jjj$jjjjuh1jhhhjhNhNubj)}(hX**Parameters** ``struct pool_workqueue *pwq`` pool_workqueue of interest ``bool fill`` max_active may have increased, try to increase concurrency level **Description** Activate the first inactive work item of **pwq** if available and allowed by max_active limit. Returns ``true`` if an inactive work item has been activated. 
``false`` if no inactive work item is found or max_active limit is reached.h](h)}(h**Parameters**h]j)}(hj.jh]h Parameters}(hj0jhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj,jubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj(jubjS)}(hhh](jX)}(h:``struct pool_workqueue *pwq`` pool_workqueue of interest h](j^)}(h``struct pool_workqueue *pwq``h]j)}(hjMjh]hstruct pool_workqueue *pwq}(hjOjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjKjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjGjubjw)}(hhh]h)}(hpool_workqueue of interesth]hpool_workqueue of interest}(hjfjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjbjhMhjcjubah}(h]h ]h"]h$]h&]uh1jvhjGjubeh}(h]h ]h"]h$]h&]uh1jWhjbjhMhjDjubjX)}(hO``bool fill`` max_active may have increased, try to increase concurrency level h](j^)}(h ``bool fill``h]j)}(hjjh]h bool fill}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjjubjw)}(hhh]h)}(h@max_active may have increased, try to increase concurrency levelh]h@max_active may have increased, try to increase concurrency level}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjjhMhjjubah}(h]h ]h"]h$]h&]uh1jvhjjubeh}(h]h ]h"]h$]h&]uh1jWhjjhMhjDjubeh}(h]h ]h"]h$]h&]uh1jRhj(jubh)}(h**Description**h]j)}(hjjh]h Description}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj(jubh)}(h^Activate the first inactive work item of **pwq** if available and allowed by max_active limit.h](h)Activate the first inactive work item of }(hjjhhhNhNubj)}(h**pwq**h]hpwq}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjjubh. if available and allowed by max_active limit.}(hjjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj(jubh)}(hReturns ``true`` if an inactive work item has been activated. ``false`` if no inactive work item is found or max_active limit is reached.h](hReturns }(hjjhhhNhNubj)}(h``true``h]htrue}(hjkhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjjubh. if an inactive work item has been activated. 
}(hjjhhhNhNubj)}(h ``false``h]hfalse}(hjkhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjjubhB if no inactive work item is found or max_active limit is reached.}(hjjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj(jubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](junplug_oldest_pwq (C function)c.unplug_oldest_pwqhNtauh1jhjhhhNhNubj)}(hhh](j)}(h4void unplug_oldest_pwq (struct workqueue_struct *wq)h]j )}(h3void unplug_oldest_pwq(struct workqueue_struct *wq)h](j)}(hvoidh]hvoid}(hjKkhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjGkhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM ubj8)}(h h]h }(hjZkhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjGkhhhjYkhM ubjI)}(hunplug_oldest_pwqh]jO)}(hunplug_oldest_pwqh]hunplug_oldest_pwq}(hjlkhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjhkubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjGkhhhjYkhM ubj)}(h(struct workqueue_struct *wq)h]j)}(hstruct workqueue_struct *wqh](j&)}(hj)h]hstruct}(hjkhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjkubj8)}(h h]h }(hjkhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjkubh)}(hhh]jO)}(hworkqueue_structh]hworkqueue_struct}(hjkhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjkubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjkmodnameN classnameNjojr)}ju]jx)}jkjnksbc.unplug_oldest_pwqasbuh1hhjkubj8)}(h h]h }(hjkhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjkubj)}(hjah]h*}(hjkhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjkubjO)}(hwqh]hwq}(hjkhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjkubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjkubah}(h]h ]h"]h$]h&]jjuh1jhjGkhhhjYkhM ubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjCkhhhjYkhM ubah}(h]j>kah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjYkhM hj@khhubj{)}(hhh]h)}(h unplug the oldest pool_workqueueh]h unplug the oldest pool_workqueue}(hj lhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjlhhubah}(h]h ]h"]h$]h&]uh1jzhj@khhhjYkhM ubeh}(h]h ](jfunctioneh"]h$]h&]jjjj#ljj#ljjjuh1jhhhjhNhNubj)}(hX!**Parameters** ``struct workqueue_struct *wq`` workqueue_struct where its oldest pwq is to be unplugged **Description** This function should only be called for ordered workqueues where only the oldest pwq is unplugged, the others are plugged to suspend execution to ensure proper work item ordering:: dfl_pwq --------------+ [P] - plugged | v pwqs -> A -> B [P] -> C [P] (newest) | | | 1 3 5 | | | 2 4 6 When the oldest pwq is drained and removed, this function should be called to unplug the next oldest one to start its work item execution. 
Note that pwq's are linked into wq->pwqs with the oldest first, so the first one in the list is the oldest.h](h)}(h**Parameters**h]j)}(hj-lh]h Parameters}(hj/lhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj+lubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj'lubjS)}(hhh]jX)}(hY``struct workqueue_struct *wq`` workqueue_struct where its oldest pwq is to be unplugged h](j^)}(h``struct workqueue_struct *wq``h]j)}(hjLlh]hstruct workqueue_struct *wq}(hjNlhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjJlubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjFlubjw)}(hhh]h)}(h8workqueue_struct where its oldest pwq is to be unpluggedh]h8workqueue_struct where its oldest pwq is to be unplugged}(hjelhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjalhM hjblubah}(h]h ]h"]h$]h&]uh1jvhjFlubeh}(h]h ]h"]h$]h&]uh1jWhjalhM hjClubah}(h]h ]h"]h$]h&]uh1jRhj'lubh)}(h**Description**h]j)}(hjlh]h Description}(hjlhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjlubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj'lubh)}(hThis function should only be called for ordered workqueues where only the oldest pwq is unplugged, the others are plugged to suspend execution to ensure proper work item ordering::h]hThis function should only be called for ordered workqueues where only the oldest pwq is unplugged, the others are plugged to suspend execution to ensure proper work item ordering:}(hjlhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj'lubjr)}(hdfl_pwq --------------+ [P] - plugged | v pwqs -> A -> B [P] -> C [P] (newest) | | | 1 3 5 | | | 2 4 6h]hdfl_pwq --------------+ [P] - plugged | v pwqs -> A -> B [P] -> C [P] (newest) | | | 1 3 5 | | | 2 4 6}hjlsbah}(h]h ]h"]h$]h&]jjuh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj'lubh)}(hWhen the oldest pwq is drained and removed, this function should be called to unplug the next oldest one to start its work item execution. Note that pwq's are linked into wq->pwqs with the oldest first, so the first one in the list is the oldest.h]hWhen the oldest pwq is drained and removed, this function should be called to unplug the next oldest one to start its work item execution. 
Note that pwq’s are linked into wq->pwqs with the oldest first, so the first one in the list is the oldest.}(hjlhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj'lubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j&node_activate_pending_pwq (C function)c.node_activate_pending_pwqhNtauh1jhjhhhNhNubj)}(hhh](j)}(h_void node_activate_pending_pwq (struct wq_node_nr_active *nna, struct worker_pool *caller_pool)h]j )}(h^void node_activate_pending_pwq(struct wq_node_nr_active *nna, struct worker_pool *caller_pool)h](j)}(hvoidh]hvoid}(hjlhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjlhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM:ubj8)}(h h]h }(hjlhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjlhhhjlhM:ubjI)}(hnode_activate_pending_pwqh]jO)}(hnode_activate_pending_pwqh]hnode_activate_pending_pwq}(hj mhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjmubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjlhhhjlhM:ubj)}(h@(struct wq_node_nr_active *nna, struct worker_pool *caller_pool)h](j)}(hstruct wq_node_nr_active *nnah](j&)}(hj)h]hstruct}(hj'mhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hj#mubj8)}(h h]h }(hj4mhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj#mubh)}(hhh]jO)}(hwq_node_nr_activeh]hwq_node_nr_active}(hjEmhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjBmubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjGmmodnameN classnameNjojr)}ju]jx)}jkj msbc.node_activate_pending_pwqasbuh1hhj#mubj8)}(h h]h }(hjemhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj#mubj)}(hjah]h*}(hjsmhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj#mubjO)}(hnnah]hnna}(hjmhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj#mubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjmubj)}(hstruct worker_pool *caller_poolh](j&)}(hj)h]hstruct}(hjmhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjmubj8)}(h h]h }(hjmhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjmubh)}(hhh]jO)}(h worker_poolh]h worker_pool}(hjmhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjmubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjmmodnameN classnameNjojr)}ju]jamc.node_activate_pending_pwqasbuh1hhjmubj8)}(h h]h }(hjmhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjmubj)}(hjah]h*}(hjmhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjmubjO)}(h caller_poolh]h caller_pool}(hjmhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjmubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjmubeh}(h]h ]h"]h$]h&]jjuh1jhjlhhhjlhM:ubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjlhhhjlhM:ubah}(h]jlah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjlhM:hjlhhubj{)}(hhh]h)}(h-Activate a pending pwq on a wq_node_nr_activeh]h-Activate a pending pwq on a wq_node_nr_active}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM3hjnhhubah}(h]h ]h"]h$]h&]uh1jzhjlhhhjlhM:ubeh}(h]h ](jfunctioneh"]h$]h&]jjjj2njj2njjjuh1jhhhjhNhNubj)}(hXT**Parameters** ``struct wq_node_nr_active *nna`` wq_node_nr_active to activate a pending pwq for ``struct worker_pool *caller_pool`` worker_pool the caller is locking **Description** Activate a pwq in **nna->pending_pwqs**. Called with **caller_pool** locked. 
**caller_pool** may be unlocked and relocked to lock other worker_pools.h](h)}(h**Parameters**h]j)}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj:nubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM7hj6nubjS)}(hhh](jX)}(hR``struct wq_node_nr_active *nna`` wq_node_nr_active to activate a pending pwq for h](j^)}(h!``struct wq_node_nr_active *nna``h]j)}(hj[nh]hstruct wq_node_nr_active *nna}(hj]nhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjYnubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM4hjUnubjw)}(hhh]h)}(h/wq_node_nr_active to activate a pending pwq forh]h/wq_node_nr_active to activate a pending pwq for}(hjtnhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjpnhM4hjqnubah}(h]h ]h"]h$]h&]uh1jvhjUnubeh}(h]h ]h"]h$]h&]uh1jWhjpnhM4hjRnubjX)}(hF``struct worker_pool *caller_pool`` worker_pool the caller is locking h](j^)}(h#``struct worker_pool *caller_pool``h]j)}(hjnh]hstruct worker_pool *caller_pool}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjnubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM5hjnubjw)}(hhh]h)}(h!worker_pool the caller is lockingh]h!worker_pool the caller is locking}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjnhM5hjnubah}(h]h ]h"]h$]h&]uh1jvhjnubeh}(h]h ]h"]h$]h&]uh1jWhjnhM5hjRnubeh}(h]h ]h"]h$]h&]uh1jRhj6nubh)}(h**Description**h]j)}(hjnh]h Description}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjnubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM7hj6nubh)}(hActivate a pwq in **nna->pending_pwqs**. Called with **caller_pool** locked. **caller_pool** may be unlocked and relocked to lock other worker_pools.h](hActivate a pwq in }(hjnhhhNhNubj)}(h**nna->pending_pwqs**h]hnna->pending_pwqs}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjnubh. Called with }(hjnhhhNhNubj)}(h**caller_pool**h]h caller_pool}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjnubh locked. 
}(hjnhhhNhNubj)}(h**caller_pool**h]h caller_pool}(hjohhhNhNubah}(h]h ]h"]h$]h&]uh1jhjnubh9 may be unlocked and relocked to lock other worker_pools.}(hjnhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM7hj6nubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jpwq_dec_nr_active (C function)c.pwq_dec_nr_activehNtauh1jhjhhhNhNubj)}(hhh](j)}(h3void pwq_dec_nr_active (struct pool_workqueue *pwq)h]j )}(h2void pwq_dec_nr_active(struct pool_workqueue *pwq)h](j)}(hvoidh]hvoid}(hjJohhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjFohhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hjYohhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjFohhhjXohMubjI)}(hpwq_dec_nr_activeh]jO)}(hpwq_dec_nr_activeh]hpwq_dec_nr_active}(hjkohhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjgoubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjFohhhjXohMubj)}(h(struct pool_workqueue *pwq)h]j)}(hstruct pool_workqueue *pwqh](j&)}(hj)h]hstruct}(hjohhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjoubj8)}(h h]h }(hjohhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjoubh)}(hhh]jO)}(hpool_workqueueh]hpool_workqueue}(hjohhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjoubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjomodnameN classnameNjojr)}ju]jx)}jkjmosbc.pwq_dec_nr_activeasbuh1hhjoubj8)}(h h]h }(hjohhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjoubj)}(hjah]h*}(hjohhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjoubjO)}(hpwqh]hpwq}(hjohhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjoubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjoubah}(h]h ]h"]h$]h&]jjuh1jhjFohhhjXohMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjBohhhjXohMubah}(h]j=oah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjXohMhj?ohhubj{)}(hhh]h)}(hRetire an active counth]hRetire an active count}(hj phhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjphhubah}(h]h ]h"]h$]h&]uh1jzhj?ohhhjXohMubeh}(h]h ](jfunctioneh"]h$]h&]jjjj"pjj"pjjjuh1jhhhjhNhNubj)}(h**Parameters** ``struct pool_workqueue *pwq`` pool_workqueue of interest **Description** Decrement **pwq**'s nr_active and try to activate the first inactive work item. For unbound workqueues, this function may temporarily drop **pwq->pool->lock**.h](h)}(h**Parameters**h]j)}(hj,ph]h Parameters}(hj.phhhNhNubah}(h]h ]h"]h$]h&]uh1jhj*pubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj&pubjS)}(hhh]jX)}(h:``struct pool_workqueue *pwq`` pool_workqueue of interest h](j^)}(h``struct pool_workqueue *pwq``h]j)}(hjKph]hstruct pool_workqueue *pwq}(hjMphhhNhNubah}(h]h ]h"]h$]h&]uh1jhjIpubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjEpubjw)}(hhh]h)}(hpool_workqueue of interesth]hpool_workqueue of interest}(hjdphhhNhNubah}(h]h ]h"]h$]h&]uh1hhj`phMhjapubah}(h]h ]h"]h$]h&]uh1jvhjEpubeh}(h]h ]h"]h$]h&]uh1jWhj`phMhjBpubah}(h]h ]h"]h$]h&]uh1jRhj&pubh)}(h**Description**h]j)}(hjph]h Description}(hjphhhNhNubah}(h]h ]h"]h$]h&]uh1jhjpubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj&pubh)}(hDecrement **pwq**'s nr_active and try to activate the first inactive work item. For unbound workqueues, this function may temporarily drop **pwq->pool->lock**.h](h Decrement }(hjphhhNhNubj)}(h**pwq**h]hpwq}(hjphhhNhNubah}(h]h ]h"]h$]h&]uh1jhjpubh|’s nr_active and try to activate the first inactive work item. 
For unbound workqueues, this function may temporarily drop }(hjphhhNhNubj)}(h**pwq->pool->lock**h]hpwq->pool->lock}(hjphhhNhNubah}(h]h ]h"]h$]h&]uh1jhjpubh.}(hjphhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj&pubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j!pwq_dec_nr_in_flight (C function)c.pwq_dec_nr_in_flighthNtauh1jhjhhhNhNubj)}(hhh](j)}(hOvoid pwq_dec_nr_in_flight (struct pool_workqueue *pwq, unsigned long work_data)h]j )}(hNvoid pwq_dec_nr_in_flight(struct pool_workqueue *pwq, unsigned long work_data)h](j)}(hvoidh]hvoid}(hjphhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjphhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hjphhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjphhhjphMubjI)}(hpwq_dec_nr_in_flighth]jO)}(hpwq_dec_nr_in_flighth]hpwq_dec_nr_in_flight}(hjqhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj qubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjphhhjphMubj)}(h5(struct pool_workqueue *pwq, unsigned long work_data)h](j)}(hstruct pool_workqueue *pwqh](j&)}(hj)h]hstruct}(hj,qhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hj(qubj8)}(h h]h }(hj9qhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj(qubh)}(hhh]jO)}(hpool_workqueueh]hpool_workqueue}(hjJqhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjGqubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjLqmodnameN classnameNjojr)}ju]jx)}jkjqsbc.pwq_dec_nr_in_flightasbuh1hhj(qubj8)}(h h]h }(hjjqhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj(qubj)}(hjah]h*}(hjxqhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj(qubjO)}(hpwqh]hpwq}(hjqhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj(qubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj$qubj)}(hunsigned long work_datah](j)}(hunsignedh]hunsigned}(hjqhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjqubj8)}(h h]h }(hjqhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjqubj)}(hlongh]hlong}(hjqhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjqubj8)}(h h]h }(hjqhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjqubjO)}(h work_datah]h work_data}(hjqhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjqubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj$qubeh}(h]h ]h"]h$]h&]jjuh1jhjphhhjphMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjphhhjphMubah}(h]jpah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjphMhjphhubj{)}(hhh]h)}(hdecrement pwq's nr_in_flighth]hdecrement pwq’s nr_in_flight}(hjrhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjqhhubah}(h]h ]h"]h$]h&]uh1jzhjphhhjphMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjrjjrjjjuh1jhhhjhNhNubj)}(hX**Parameters** ``struct pool_workqueue *pwq`` pwq of interest ``unsigned long work_data`` work_data of work which left the queue **Description** A work either has completed or is removed from pending queue, decrement nr_in_flight of its pwq and handle workqueue flushing. **NOTE** For unbound workqueues, this function may temporarily drop **pwq->pool->lock** and thus should be called after all other state updates for the in-flight work item is complete. 
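The nr_active bookkeeping described above is what enforces a workqueue's ``max_active`` limit: when an active item retires, the next throttled (inactive) item is activated. From a caller's point of view the limit is simply the third argument of alloc_workqueue(). A minimal sketch, assuming a module-local handler named ``my_work_fn`` (the handler and workqueue names are illustrative, not from this file)::

    #include <linux/workqueue.h>

    static void my_work_fn(struct work_struct *work)
    {
            /* runs in process context on a worker thread */
    }

    static DECLARE_WORK(my_work_a, my_work_fn);
    static DECLARE_WORK(my_work_b, my_work_fn);

    static int my_wq_demo(void)
    {
            struct workqueue_struct *wq;

            /* at most one item of this workqueue is active at a time */
            wq = alloc_workqueue("my_wq", 0, 1);
            if (!wq)
                    return -ENOMEM;

            queue_work(wq, &my_work_a);
            queue_work(wq, &my_work_b);     /* stays inactive until my_work_a retires */

            flush_workqueue(wq);
            destroy_workqueue(wq);
            return 0;
    }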
**Context** raw_spin_lock_irq(pool->lock).h](h)}(h**Parameters**h]j)}(hj"rh]h Parameters}(hj$rhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj rubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjrubjS)}(hhh](jX)}(h/``struct pool_workqueue *pwq`` pwq of interest h](j^)}(h``struct pool_workqueue *pwq``h]j)}(hjArh]hstruct pool_workqueue *pwq}(hjCrhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj?rubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj;rubjw)}(hhh]h)}(hpwq of interesth]hpwq of interest}(hjZrhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjVrhMhjWrubah}(h]h ]h"]h$]h&]uh1jvhj;rubeh}(h]h ]h"]h$]h&]uh1jWhjVrhMhj8rubjX)}(hC``unsigned long work_data`` work_data of work which left the queue h](j^)}(h``unsigned long work_data``h]j)}(hjzrh]hunsigned long work_data}(hj|rhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjxrubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjtrubjw)}(hhh]h)}(h&work_data of work which left the queueh]h&work_data of work which left the queue}(hjrhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjrhMhjrubah}(h]h ]h"]h$]h&]uh1jvhjtrubeh}(h]h ]h"]h$]h&]uh1jWhjrhMhj8rubeh}(h]h ]h"]h$]h&]uh1jRhjrubh)}(h**Description**h]j)}(hjrh]h Description}(hjrhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjrubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjrubh)}(h~A work either has completed or is removed from pending queue, decrement nr_in_flight of its pwq and handle workqueue flushing.h]h~A work either has completed or is removed from pending queue, decrement nr_in_flight of its pwq and handle workqueue flushing.}(hjrhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjrubh)}(h**NOTE**h]j)}(hjrh]hNOTE}(hjrhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjrubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjrubh)}(hFor unbound workqueues, this function may temporarily drop **pwq->pool->lock** and thus should be called after all other state updates for the in-flight work item is complete.h](h;For unbound workqueues, this function may temporarily drop }(hjrhhhNhNubj)}(h**pwq->pool->lock**h]hpwq->pool->lock}(hjrhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjrubha and thus should be called after all other state updates for the in-flight work item is complete.}(hjrhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjrubh)}(h **Context**h]j)}(hjsh]hContext}(hjshhhNhNubah}(h]h ]h"]h$]h&]uh1jhjsubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjrubh)}(hraw_spin_lock_irq(pool->lock).h]hraw_spin_lock_irq(pool->lock).}(hj+shhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjrubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j try_to_grab_pending (C function)c.try_to_grab_pendinghNtauh1jhjhhhNhNubj)}(hhh](j)}(hXint try_to_grab_pending (struct work_struct *work, u32 cflags, unsigned long *irq_flags)h]j )}(hWint try_to_grab_pending(struct work_struct *work, u32 cflags, unsigned long *irq_flags)h](j)}(hinth]hint}(hjZshhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjVshhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hjishhhNhNubah}(h]h 
]jDah"]h$]h&]uh1j7hjVshhhjhshMubjI)}(htry_to_grab_pendingh]jO)}(htry_to_grab_pendingh]htry_to_grab_pending}(hj{shhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjwsubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjVshhhjhshMubj)}(h@(struct work_struct *work, u32 cflags, unsigned long *irq_flags)h](j)}(hstruct work_struct *workh](j&)}(hj)h]hstruct}(hjshhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjsubj8)}(h h]h }(hjshhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjsubh)}(hhh]jO)}(h work_structh]h work_struct}(hjshhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjsubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjsmodnameN classnameNjojr)}ju]jx)}jkj}ssbc.try_to_grab_pendingasbuh1hhjsubj8)}(h h]h }(hjshhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjsubj)}(hjah]h*}(hjshhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjsubjO)}(hworkh]hwork}(hjshhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjsubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjsubj)}(h u32 cflagsh](h)}(hhh]jO)}(hu32h]hu32}(hj thhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj tubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjtmodnameN classnameNjojr)}ju]jsc.try_to_grab_pendingasbuh1hhjtubj8)}(h h]h }(hj*thhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjtubjO)}(hcflagsh]hcflags}(hj8thhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjtubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjsubj)}(hunsigned long *irq_flagsh](j)}(hunsignedh]hunsigned}(hjQthhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjMtubj8)}(h h]h }(hj_thhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjMtubj)}(hlongh]hlong}(hjmthhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjMtubj8)}(h h]h }(hj{thhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjMtubj)}(hjah]h*}(hjthhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjMtubjO)}(h irq_flagsh]h irq_flags}(hjthhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjMtubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjsubeh}(h]h ]h"]h$]h&]jjuh1jhjVshhhjhshMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjRshhhjhshMubah}(h]jMsah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjhshMhjOshhubj{)}(hhh]h)}(h-steal work item from worklist and disable irqh]h-steal work item from worklist and disable irq}(hjthhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjthhubah}(h]h ]h"]h$]h&]uh1jzhjOshhhjhshMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjtjjtjjjuh1jhhhjhNhNubj)}(hX**Parameters** ``struct work_struct *work`` work item to steal ``u32 cflags`` ``WORK_CANCEL_`` flags ``unsigned long *irq_flags`` place to store irq state **Description** Try to grab PENDING bit of **work**. This function can handle **work** in any stable state - idle, on timer or on worklist. On successful return, >= 0, irq is disabled and the caller is responsible for releasing it using local_irq_restore(***irq_flags**). This function is safe to call from any context including IRQ handler. **Return** ======== ================================================================ 1 if **work** was pending and we successfully stole PENDING 0 if **work** was idle and we claimed PENDING -EAGAIN if PENDING couldn't be grabbed at the moment, safe to busy-retry ======== ================================================================ **Note** On >= 0 return, the caller owns **work**'s PENDING bit. To avoid getting interrupted while holding PENDING and **work** off queue, irq must be disabled on entry. 
This, combined with delayed_work->timer being irqsafe, ensures that we return -EAGAIN for finite short period of time.h](h)}(h**Parameters**h]j)}(hjth]h Parameters}(hjthhhNhNubah}(h]h ]h"]h$]h&]uh1jhjtubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjtubjS)}(hhh](jX)}(h0``struct work_struct *work`` work item to steal h](j^)}(h``struct work_struct *work``h]j)}(hjuh]hstruct work_struct *work}(hjuhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjtubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjtubjw)}(hhh]h)}(hwork item to stealh]hwork item to steal}(hjuhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjuhMhjuubah}(h]h ]h"]h$]h&]uh1jvhjtubeh}(h]h ]h"]h$]h&]uh1jWhjuhMhjtubjX)}(h&``u32 cflags`` ``WORK_CANCEL_`` flags h](j^)}(h``u32 cflags``h]j)}(hj:uh]h u32 cflags}(hj= 0, irq is disabled and the caller is responsible for releasing it using local_irq_restore(***irq_flags**).h](hsOn successful return, >= 0, irq is disabled and the caller is responsible for releasing it using local_irq_restore(}(hjvhhhNhNubj)}(h***irq_flags**h]h *irq_flags}(hj vhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjvubh).}(hjvhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjtubh)}(hEThis function is safe to call from any context including IRQ handler.h]hEThis function is safe to call from any context including IRQ handler.}(hj&vhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjtubh)}(h **Return**h]j)}(hj7vh]hReturn}(hj9vhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj5vubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjtubjJ)}(hXe======== ================================================================ 1 if **work** was pending and we successfully stole PENDING 0 if **work** was idle and we claimed PENDING -EAGAIN if PENDING couldn't be grabbed at the moment, safe to busy-retry ======== ================================================================ h]j )}(hhh]j )}(hhh](j )}(hhh]h}(h]h ]h"]h$]h&]colwidthKuh1j hjTvubj )}(hhh]h}(h]h ]h"]h$]h&]colwidthK@uh1j hjTvubj* )}(hhh](j )}(hhh](j )}(hhh]h)}(h1h]h1}(hjtvhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjqvubah}(h]h ]h"]h$]h&]uh1j hjnvubj )}(hhh]h)}(h9if **work** was pending and we successfully stole PENDINGh](hif }(hjvhhhNhNubj)}(h**work**h]hwork}(hjvhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjvubh. 
was pending and we successfully stole PENDING}(hjvhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhjvhMhjvubah}(h]h ]h"]h$]h&]uh1j hjnvubeh}(h]h ]h"]h$]h&]uh1j hjkvubj )}(hhh](j )}(hhh]h)}(h0h]h0}(hjvhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjvubah}(h]h ]h"]h$]h&]uh1j hjvubj )}(hhh]h)}(h+if **work** was idle and we claimed PENDINGh](hif }(hjvhhhNhNubj)}(h**work**h]hwork}(hjvhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjvubh was idle and we claimed PENDING}(hjvhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhjvhMhjvubah}(h]h ]h"]h$]h&]uh1j hjvubeh}(h]h ]h"]h$]h&]uh1j hjkvubj )}(hhh](j )}(hhh]h)}(h-EAGAINh]h-EAGAIN}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjwubah}(h]h ]h"]h$]h&]uh1j hjwubj )}(hhh]h)}(h@if PENDING couldn't be grabbed at the moment, safe to busy-retryh]hBif PENDING couldn’t be grabbed at the moment, safe to busy-retry}(hj whhhNhNubah}(h]h ]h"]h$]h&]uh1hhjwhMhjwubah}(h]h ]h"]h$]h&]uh1j hjwubeh}(h]h ]h"]h$]h&]uh1j hjkvubeh}(h]h ]h"]h$]h&]uh1j) hjTvubeh}(h]h ]h"]h$]h&]colsKuh1j hjQvubah}(h]h ]h"]h$]h&]uh1j hjMvubah}(h]h ]h"]h$]h&]uh1jIhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjtubh)}(h**Note**h]j)}(hjVwh]hNote}(hjXwhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjTwubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjtubh)}(hXOn >= 0 return, the caller owns **work**'s PENDING bit. To avoid getting interrupted while holding PENDING and **work** off queue, irq must be disabled on entry. This, combined with delayed_work->timer being irqsafe, ensures that we return -EAGAIN for finite short period of time.h](h On >= 0 return, the caller owns }(hjlwhhhNhNubj)}(h**work**h]hwork}(hjtwhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjlwubhJ’s PENDING bit. To avoid getting interrupted while holding PENDING and }(hjlwhhhNhNubj)}(h**work**h]hwork}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjlwubh off queue, irq must be disabled on entry. 
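try_to_grab_pending() and its -EAGAIN busy-retry are internal; drivers normally meet this logic only through the public cancellation helpers layered on top of it. A hedged sketch of that caller-visible side, using an illustrative driver-private delayed work item::

    #include <linux/workqueue.h>

    static void my_timeout_fn(struct work_struct *work)
    {
            /* timeout handling */
    }

    static DECLARE_DELAYED_WORK(my_timeout, my_timeout_fn);

    static void my_arm_timeout(void)
    {
            queue_delayed_work(system_wq, &my_timeout, HZ);
    }

    static void my_disarm_timeout(void)
    {
            /*
             * Grabs a still-pending item before it runs; does not wait
             * for a handler that is already executing.
             */
            cancel_delayed_work(&my_timeout);

            /* ...or, alternatively, also wait for a running handler: */
            cancel_delayed_work_sync(&my_timeout);
    }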
This, combined with delayed_work->timer being irqsafe, ensures that we return -EAGAIN for finite short period of time.}(hjlwhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjtubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jwork_grab_pending (C function)c.work_grab_pendinghNtauh1jhjhhhNhNubj)}(hhh](j)}(hWbool work_grab_pending (struct work_struct *work, u32 cflags, unsigned long *irq_flags)h]j )}(hVbool work_grab_pending(struct work_struct *work, u32 cflags, unsigned long *irq_flags)h](j)}(hj7&h]hbool}(hjwhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjwhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMgubj8)}(h h]h }(hjwhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjwhhhjwhMgubjI)}(hwork_grab_pendingh]jO)}(hwork_grab_pendingh]hwork_grab_pending}(hjwhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjwubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjwhhhjwhMgubj)}(h@(struct work_struct *work, u32 cflags, unsigned long *irq_flags)h](j)}(hstruct work_struct *workh](j&)}(hj)h]hstruct}(hjwhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjwubj8)}(h h]h }(hjxhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjwubh)}(hhh]jO)}(h work_structh]h work_struct}(hjxhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjxubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjxmodnameN classnameNjojr)}ju]jx)}jkjwsbc.work_grab_pendingasbuh1hhjwubj8)}(h h]h }(hj9xhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjwubj)}(hjah]h*}(hjGxhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjwubjO)}(hworkh]hwork}(hjTxhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjwubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjwubj)}(h u32 cflagsh](h)}(hhh]jO)}(hu32h]hu32}(hjpxhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjmxubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjrxmodnameN classnameNjojr)}ju]j5xc.work_grab_pendingasbuh1hhjixubj8)}(h h]h }(hjxhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjixubjO)}(hcflagsh]hcflags}(hjxhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjixubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjwubj)}(hunsigned long *irq_flagsh](j)}(hunsignedh]hunsigned}(hjxhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjxubj8)}(h h]h }(hjxhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjxubj)}(hlongh]hlong}(hjxhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjxubj8)}(h h]h }(hjxhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjxubj)}(hjah]h*}(hjxhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjxubjO)}(h irq_flagsh]h irq_flags}(hjxhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjxubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjwubeh}(h]h ]h"]h$]h&]jjuh1jhjwhhhjwhMgubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjwhhhjwhMgubah}(h]jwah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjwhMghjwhhubj{)}(hhh]h)}(h-steal work item from worklist and disable irqh]h-steal work item from worklist and disable irq}(hj$yhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMYhj!yhhubah}(h]h ]h"]h$]h&]uh1jzhjwhhhjwhMgubeh}(h]h ](jfunctioneh"]h$]h&]jjjjzhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj6zubh. }(hj6zhhhNhNubj)}(h**work**h]hwork}(hjPzhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj6zubh< can be in any stable state - idle, on timer or on worklist.}(hj6zhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM^hj@yubh)}(hCan be called from any context. IRQ is disabled on return with IRQ state stored in ***irq_flags**. The caller is responsible for re-enabling it using local_irq_restore().h](hSCan be called from any context. IRQ is disabled on return with IRQ state stored in }(hjizhhhNhNubj)}(h***irq_flags**h]h *irq_flags}(hjqzhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjizubhI. 
The caller is responsible for re-enabling it using local_irq_restore().}(hjizhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMahj@yubh)}(hlock).h](h)}(h**Parameters**h]j)}(hj}h]h Parameters}(hj}hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj}ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMyhj|ubjS)}(hhh](jX)}(h7``struct pool_workqueue *pwq`` pwq **work** belongs to h](j^)}(h``struct pool_workqueue *pwq``h]j)}(hj!}h]hstruct pool_workqueue *pwq}(hj#}hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj}ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMvhj}ubjw)}(hhh]h)}(hpwq **work** belongs toh](hpwq }(hj:}hhhNhNubj)}(h**work**h]hwork}(hjB}hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj:}ubh belongs to}(hj:}hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhj6}hMvhj7}ubah}(h]h ]h"]h$]h&]uh1jvhj}ubeh}(h]h ]h"]h$]h&]uh1jWhj6}hMvhj}ubjX)}(h,``struct work_struct *work`` work to insert h](j^)}(h``struct work_struct *work``h]j)}(hjl}h]hstruct work_struct *work}(hjn}hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjj}ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMwhjf}ubjw)}(hhh]h)}(hwork to inserth]hwork to insert}(hj}hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj}hMwhj}ubah}(h]h ]h"]h$]h&]uh1jvhjf}ubeh}(h]h ]h"]h$]h&]uh1jWhj}hMwhj}ubjX)}(h+``struct list_head *head`` insertion point h](j^)}(h``struct list_head *head``h]j)}(hj}h]hstruct list_head *head}(hj}hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj}ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMxhj}ubjw)}(hhh]h)}(hinsertion pointh]hinsertion point}(hj}hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj}hMxhj}ubah}(h]h ]h"]h$]h&]uh1jvhj}ubeh}(h]h ]h"]h$]h&]uh1jWhj}hMxhj}ubjX)}(h>``unsigned int extra_flags`` extra WORK_STRUCT_* flags to set h](j^)}(h``unsigned int extra_flags``h]j)}(hj}h]hunsigned int extra_flags}(hj}hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj}ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMyhj}ubjw)}(hhh]h)}(h extra WORK_STRUCT_* flags to seth]h extra WORK_STRUCT_* flags to set}(hj}hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj}hMyhj}ubah}(h]h ]h"]h$]h&]uh1jvhj}ubeh}(h]h ]h"]h$]h&]uh1jWhj}hMyhj}ubeh}(h]h ]h"]h$]h&]uh1jRhj|ubh)}(h**Description**h]j)}(hj~h]h Description}(hj~hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj~ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM{hj|ubh)}(hgInsert **work** which belongs to **pwq** after **head**. **extra_flags** is or'd to work_struct flags.h](hInsert }(hj/~hhhNhNubj)}(h**work**h]hwork}(hj7~hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj/~ubh which belongs to }(hj/~hhhNhNubj)}(h**pwq**h]hpwq}(hjI~hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj/~ubh after }(hj/~hhhNhNubj)}(h**head**h]hhead}(hj[~hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj/~ubh. 
}(hj/~hhhNhNubj)}(h**extra_flags**h]h extra_flags}(hjm~hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj/~ubh is or’d to work_struct flags.}(hj/~hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM{hj|ubh)}(h **Context**h]j)}(hj~h]hContext}(hj~hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj~ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM~hj|ubh)}(hraw_spin_lock_irq(pool->lock).h]hraw_spin_lock_irq(pool->lock).}(hj~hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM~hj|ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jqueue_work_on (C function)c.queue_work_onhNtauh1jhjhhhNhNubj)}(hhh](j)}(hSbool queue_work_on (int cpu, struct workqueue_struct *wq, struct work_struct *work)h]j )}(hRbool queue_work_on(int cpu, struct workqueue_struct *wq, struct work_struct *work)h](j)}(hj7&h]hbool}(hj~hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj~hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMN ubj8)}(h h]h }(hj~hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj~hhhj~hMN ubjI)}(h queue_work_onh]jO)}(h queue_work_onh]h queue_work_on}(hj~hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj~ubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhj~hhhj~hMN ubj)}(h@(int cpu, struct workqueue_struct *wq, struct work_struct *work)h](j)}(hint cpuh](j)}(hinth]hint}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubjO)}(hcpuh]hcpu}(hj%hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct workqueue_struct *wqh](j&)}(hj)h]hstruct}(hj>hhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hj:ubj8)}(h h]h }(hjKhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj:ubh)}(hhh]jO)}(hworkqueue_structh]hworkqueue_struct}(hj\hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjYubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetj^modnameN classnameNjojr)}ju]jx)}jkj~sbc.queue_work_onasbuh1hhj:ubj8)}(h h]h }(hj|hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj:ubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj:ubjO)}(hwqh]hwq}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj:ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct work_struct *workh](j&)}(hj)h]hstruct}(hjhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubh)}(hhh]jO)}(h work_structh]h work_struct}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjmodnameN classnameNjojr)}ju]jxc.queue_work_onasbuh1hhjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubjO)}(hworkh]hwork}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1jhj~hhhj~hMN ubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhj~hhhj~hMN ubah}(h]j~ah ](jrjseh"]h$]h&]jwjx)jyhuh1jhj~hMN hj~hhubj{)}(hhh]h)}(hqueue work on specific cpuh]hqueue work on specific cpu}(hj1hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMA hj.hhubah}(h]h ]h"]h$]h&]uh1jzhj~hhhj~hMN ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjIjjIjjjuh1jhhhjhNhNubj)}(hX**Parameters** ``int cpu`` CPU number to execute work on ``struct workqueue_struct *wq`` workqueue to use ``struct work_struct *work`` work to queue **Description** We queue the work to a specific CPU, the caller must ensure it can't go away. Callers that fail to ensure that the specified CPU cannot go away will execute on a randomly chosen CPU. 
But note well that callers specifying a CPU that never has been online will get a splat. **Return** ``false`` if **work** was already on a queue, ``true`` otherwise.h](h)}(h**Parameters**h]j)}(hjSh]h Parameters}(hjUhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjQubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chME hjMubjS)}(hhh](jX)}(h*``int cpu`` CPU number to execute work on h](j^)}(h ``int cpu``h]j)}(hjrh]hint cpu}(hjthhhNhNubah}(h]h ]h"]h$]h&]uh1jhjpubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMB hjlubjw)}(hhh]h)}(hCPU number to execute work onh]hCPU number to execute work on}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMB hjubah}(h]h ]h"]h$]h&]uh1jvhjlubeh}(h]h ]h"]h$]h&]uh1jWhjhMB hjiubjX)}(h1``struct workqueue_struct *wq`` workqueue to use h](j^)}(h``struct workqueue_struct *wq``h]j)}(hjh]hstruct workqueue_struct *wq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMC hjubjw)}(hhh]h)}(hworkqueue to useh]hworkqueue to use}(hjĀhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMC hjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMC hjiubjX)}(h+``struct work_struct *work`` work to queue h](j^)}(h``struct work_struct *work``h]j)}(hjh]hstruct work_struct *work}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMD hjހubjw)}(hhh]h)}(h work to queueh]h work to queue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMD hjubah}(h]h ]h"]h$]h&]uh1jvhjހubeh}(h]h ]h"]h$]h&]uh1jWhjhMD hjiubeh}(h]h ]h"]h$]h&]uh1jRhjMubh)}(h**Description**h]j)}(hjh]h Description}(hj!hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMF hjMubh)}(hXWe queue the work to a specific CPU, the caller must ensure it can't go away. Callers that fail to ensure that the specified CPU cannot go away will execute on a randomly chosen CPU. But note well that callers specifying a CPU that never has been online will get a splat.h]hXWe queue the work to a specific CPU, the caller must ensure it can’t go away. Callers that fail to ensure that the specified CPU cannot go away will execute on a randomly chosen CPU. 
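One hedged way to satisfy the "caller must ensure the CPU can't go away" requirement at queueing time is to queue under cpus_read_lock() after checking cpu_online(); the wrapper name below is illustrative::

    #include <linux/cpu.h>
    #include <linux/workqueue.h>

    static bool my_queue_on_cpu(struct workqueue_struct *wq,
                                struct work_struct *work, int cpu)
    {
            bool queued = false;

            cpus_read_lock();               /* holds off CPU hot-unplug */
            if (cpu_online(cpu))
                    queued = queue_work_on(cpu, wq, work);
            cpus_read_unlock();

            return queued;
    }

Note that this only guarantees the CPU is online while queueing; keeping it online until the work item actually executes needs a wider cpus_read_lock() section or a hotplug callback, depending on the driver.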
But note well that callers specifying a CPU that never has been online will get a splat.}(hj5hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMF hjMubh)}(h **Return**h]j)}(hjFh]hReturn}(hjHhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjDubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chML hjMubh)}(hA``false`` if **work** was already on a queue, ``true`` otherwise.h](j)}(h ``false``h]hfalse}(hj`hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj\ubh if }(hj\hhhNhNubj)}(h**work**h]hwork}(hjrhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj\ubh was already on a queue, }(hj\hhhNhNubj)}(h``true``h]htrue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj\ubh otherwise.}(hj\hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chML hjMubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j!select_numa_node_cpu (C function)c.select_numa_node_cpuhNtauh1jhjhhhNhNubj)}(hhh](j)}(h#int select_numa_node_cpu (int node)h]j )}(h"int select_numa_node_cpu(int node)h](j)}(hinth]hint}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMj ubj8)}(h h]h }(hj́hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjhhhjˁhMj ubjI)}(hselect_numa_node_cpuh]jO)}(hselect_numa_node_cpuh]hselect_numa_node_cpu}(hjށhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjځubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjhhhjˁhMj ubj)}(h (int node)h]j)}(hint nodeh](j)}(hinth]hint}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubjO)}(hnodeh]hnode}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1jhjhhhjˁhMj ubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjhhhjˁhMj ubah}(h]jah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjˁhMj hjhhubj{)}(hhh]h)}(hSelect a CPU based on NUMA nodeh]hSelect a CPU based on NUMA node}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMb hj=hhubah}(h]h ]h"]h$]h&]uh1jzhjhhhjˁhMj ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjXjjXjjjuh1jhhhjhNhNubj)}(hX\**Parameters** ``int node`` NUMA node ID that we want to select a CPU from **Description** This function will attempt to find a "random" cpu available on a given node. If there are no CPUs available on the given node it will return WORK_CPU_UNBOUND indicating that we should just schedule to any available CPU if we need to schedule this work.h](h)}(h**Parameters**h]j)}(hjbh]h Parameters}(hjdhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj`ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMf hj\ubjS)}(hhh]jX)}(h<``int node`` NUMA node ID that we want to select a CPU from h](j^)}(h ``int node``h]j)}(hjh]hint node}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMc hj{ubjw)}(hhh]h)}(h.NUMA node ID that we want to select a CPU fromh]h.NUMA node ID that we want to select a CPU from}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMc hjubah}(h]h ]h"]h$]h&]uh1jvhj{ubeh}(h]h ]h"]h$]h&]uh1jWhjhMc hjxubah}(h]h ]h"]h$]h&]uh1jRhj\ubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMe hj\ubh)}(hThis function will attempt to find a "random" cpu available on a given node. 
If there are no CPUs available on the given node it will return WORK_CPU_UNBOUND indicating that we should just schedule to any available CPU if we need to schedule this work.h]hXThis function will attempt to find a “random” cpu available on a given node. If there are no CPUs available on the given node it will return WORK_CPU_UNBOUND indicating that we should just schedule to any available CPU if we need to schedule this work.}(hj҂hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMe hj\ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jqueue_work_node (C function)c.queue_work_nodehNtauh1jhjhhhNhNubj)}(hhh](j)}(hVbool queue_work_node (int node, struct workqueue_struct *wq, struct work_struct *work)h]j )}(hUbool queue_work_node(int node, struct workqueue_struct *wq, struct work_struct *work)h](j)}(hj7&h]hbool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM ubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjhhhjhM ubjI)}(hqueue_work_nodeh]jO)}(hqueue_work_nodeh]hqueue_work_node}(hj!hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjhhhjhM ubj)}(hA(int node, struct workqueue_struct *wq, struct work_struct *work)h](j)}(hint nodeh](j)}(hinth]hint}(hj=hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj9ubj8)}(h h]h }(hjKhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj9ubjO)}(hnodeh]hnode}(hjYhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj9ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj5ubj)}(hstruct workqueue_struct *wqh](j&)}(hj)h]hstruct}(hjrhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjnubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjnubh)}(hhh]jO)}(hworkqueue_structh]hworkqueue_struct}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjmodnameN classnameNjojr)}ju]jx)}jkj#sbc.queue_work_nodeasbuh1hhjnubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjnubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjnubjO)}(hwqh]hwq}(hj˃hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjnubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj5ubj)}(hstruct work_struct *workh](j&)}(hj)h]hstruct}(hjhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubh)}(hhh]jO)}(h work_structh]h work_struct}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjmodnameN classnameNjojr)}ju]jc.queue_work_nodeasbuh1hhjubj8)}(h h]h }(hj hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubj)}(hjah]h*}(hj.hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubjO)}(hworkh]hwork}(hj;hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj5ubeh}(h]h ]h"]h$]h&]jjuh1jhjhhhjhM ubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjhhhjhM ubah}(h]jah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjhM hjhhubj{)}(hhh]h)}(h2queue work on a "random" cpu for a given NUMA nodeh]h6queue work on a “random” cpu for a given NUMA node}(hjehhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjbhhubah}(h]h ]h"]h$]h&]uh1jzhjhhhjhM ubeh}(h]h ](jfunctioneh"]h$]h&]jjjj}jj}jjjuh1jhhhjhNhNubj)}(hXH**Parameters** ``int node`` NUMA node that we are targeting the work for ``struct workqueue_struct *wq`` workqueue to use ``struct work_struct *work`` work to queue **Description** We queue the work to a "random" CPU within a given NUMA node. The basic idea here is to provide a way to somehow associate work with a given NUMA node. This function will only make a best effort attempt at getting this onto the right NUMA node. 
If no node is requested or the requested node is offline then we just fall back to standard queue_work behavior. Currently the "random" CPU ends up being the first available CPU in the intersection of cpu_online_mask and the cpumask of the node, unless we are running on the node. In that case we just use the current CPU. **Return** ``false`` if **work** was already on a queue, ``true`` otherwise.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjubjS)}(hhh](jX)}(h:``int node`` NUMA node that we are targeting the work for h](j^)}(h ``int node``h]j)}(hjh]hint node}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjubjw)}(hhh]h)}(h,NUMA node that we are targeting the work forh]h,NUMA node that we are targeting the work for}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM hjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhM hjubjX)}(h1``struct workqueue_struct *wq`` workqueue to use h](j^)}(h``struct workqueue_struct *wq``h]j)}(hj߄h]hstruct workqueue_struct *wq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj݄ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjلubjw)}(hhh]h)}(hworkqueue to useh]hworkqueue to use}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM hjubah}(h]h ]h"]h$]h&]uh1jvhjلubeh}(h]h ]h"]h$]h&]uh1jWhjhM hjubjX)}(h+``struct work_struct *work`` work to queue h](j^)}(h``struct work_struct *work``h]j)}(hjh]hstruct work_struct *work}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjubjw)}(hhh]h)}(h work to queueh]h work to queue}(hj1hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj-hM hj.ubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhj-hM hjubeh}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjSh]h Description}(hjUhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjQubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjubh)}(hWe queue the work to a "random" CPU within a given NUMA node. The basic idea here is to provide a way to somehow associate work with a given NUMA node.h]hWe queue the work to a “random” CPU within a given NUMA node. The basic idea here is to provide a way to somehow associate work with a given NUMA node.}(hjihhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjubh)}(hThis function will only make a best effort attempt at getting this onto the right NUMA node. If no node is requested or the requested node is offline then we just fall back to standard queue_work behavior.h]hThis function will only make a best effort attempt at getting this onto the right NUMA node. If no node is requested or the requested node is offline then we just fall back to standard queue_work behavior.}(hjxhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjubh)}(hCurrently the "random" CPU ends up being the first available CPU in the intersection of cpu_online_mask and the cpumask of the node, unless we are running on the node. In that case we just use the current CPU.h]hCurrently the “random” CPU ends up being the first available CPU in the intersection of cpu_online_mask and the cpumask of the node, unless we are running on the node. 
In that case we just use the current CPU.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjubh)}(hA``false`` if **work** was already on a queue, ``true`` otherwise.h](j)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if }(hjhhhNhNubj)}(h**work**h]hwork}(hjąhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh was already on a queue, }(hjhhhNhNubj)}(h``true``h]htrue}(hjօhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh otherwise.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j"queue_delayed_work_on (C function)c.queue_delayed_work_onhNtauh1jhjhhhNhNubj)}(hhh](j)}(hrbool queue_delayed_work_on (int cpu, struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay)h]j )}(hqbool queue_delayed_work_on(int cpu, struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay)h](j)}(hj7&h]hbool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM ubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj hhhjhM ubjI)}(hqueue_delayed_work_onh]jO)}(hqueue_delayed_work_onh]hqueue_delayed_work_on}(hj/hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj+ubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhj hhhjhM ubj)}(hW(int cpu, struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay)h](j)}(hint cpuh](j)}(hinth]hint}(hjKhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjGubj8)}(h h]h }(hjYhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjGubjO)}(hcpuh]hcpu}(hjghhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjGubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjCubj)}(hstruct workqueue_struct *wqh](j&)}(hj)h]hstruct}(hjhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hj|ubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj|ubh)}(hhh]jO)}(hworkqueue_structh]hworkqueue_struct}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjmodnameN classnameNjojr)}ju]jx)}jkj1sbc.queue_delayed_work_onasbuh1hhj|ubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj|ubj)}(hjah]h*}(hj̆hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj|ubjO)}(hwqh]hwq}(hjنhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj|ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjCubj)}(hstruct delayed_work *dworkh](j&)}(hj)h]hstruct}(hjhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubh)}(hhh]jO)}(h delayed_workh]h delayed_work}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj ubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjmodnameN classnameNjojr)}ju]jc.queue_delayed_work_onasbuh1hhjubj8)}(h h]h }(hj.hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubj)}(hjah]h*}(hj<hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubjO)}(hdworkh]hdwork}(hjIhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjCubj)}(hunsigned long delayh](j)}(hunsignedh]hunsigned}(hjbhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj^ubj8)}(h h]h }(hjphhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj^ubj)}(hlongh]hlong}(hj~hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj^ubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj^ubjO)}(hdelayh]hdelay}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj^ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjCubeh}(h]h ]h"]h$]h&]jjuh1jhj hhhjhM ubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjhhhjhM ubah}(h]jah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjhM hjhhubj{)}(hhh]h)}(h&queue work on specific CPU after 
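A hedged sketch of NUMA-aware queueing from a driver, using dev_to_node() to pick the node closest to the device (the helper name is illustrative)::

    #include <linux/device.h>
    #include <linux/workqueue.h>

    static void my_queue_near_device(struct device *dev,
                                     struct workqueue_struct *wq,
                                     struct work_struct *work)
    {
            int node = dev_to_node(dev);    /* may be NUMA_NO_NODE */

            /*
             * Best effort only: with no requested node, or no online CPU
             * on that node, this falls back to plain queue_work()
             * behavior.  wq is assumed to be a WQ_UNBOUND workqueue.
             */
            queue_work_node(node, wq, work);
    }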
delayh]h&queue work on specific CPU after delay}(hjćhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjhhubah}(h]h ]h"]h$]h&]uh1jzhjhhhjhM ubeh}(h]h ](jfunctioneh"]h$]h&]jjjj܇jj܇jjjuh1jhhhjhNhNubj)}(hX**Parameters** ``int cpu`` CPU number to execute work on ``struct workqueue_struct *wq`` workqueue to use ``struct delayed_work *dwork`` work to queue ``unsigned long delay`` number of jiffies to wait before queueing **Description** We queue the delayed_work to a specific CPU, for non-zero delays the caller must ensure it is online and can't go away. Callers that fail to ensure this, may get **dwork->timer** queued to an offlined CPU and this will prevent queueing of **dwork->work** unless the offlined CPU becomes online again. **Return** ``false`` if **work** was already on a queue, ``true`` otherwise. If **delay** is zero and **dwork** is idle, it will be scheduled for immediate execution.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjubjS)}(hhh](jX)}(h*``int cpu`` CPU number to execute work on h](j^)}(h ``int cpu``h]j)}(hjh]hint cpu}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjubjw)}(hhh]h)}(hCPU number to execute work onh]hCPU number to execute work on}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM hjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhM hjubjX)}(h1``struct workqueue_struct *wq`` workqueue to use h](j^)}(h``struct workqueue_struct *wq``h]j)}(hj>h]hstruct workqueue_struct *wq}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj<ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hj8ubjw)}(hhh]h)}(hworkqueue to useh]hworkqueue to use}(hjWhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjShM hjTubah}(h]h ]h"]h$]h&]uh1jvhj8ubeh}(h]h ]h"]h$]h&]uh1jWhjShM hjubjX)}(h-``struct delayed_work *dwork`` work to queue h](j^)}(h``struct delayed_work *dwork``h]j)}(hjwh]hstruct delayed_work *dwork}(hjyhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjuubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjqubjw)}(hhh]h)}(h work to queueh]h work to queue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM hjubah}(h]h ]h"]h$]h&]uh1jvhjqubeh}(h]h ]h"]h$]h&]uh1jWhjhM hjubjX)}(hB``unsigned long delay`` number of jiffies to wait before queueing h](j^)}(h``unsigned long delay``h]j)}(hjh]hunsigned long delay}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjubjw)}(hhh]h)}(h)number of jiffies to wait before queueingh]h)number of jiffies to wait before queueing}(hjɈhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjňhM hjƈubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjňhM hjubeh}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjubh)}(hX,We queue the delayed_work to a specific CPU, for non-zero delays the caller must ensure it is online and can't go away. 
Callers that fail to ensure this, may get **dwork->timer** queued to an offlined CPU and this will prevent queueing of **dwork->work** unless the offlined CPU becomes online again.h](hWe queue the delayed_work to a specific CPU, for non-zero delays the caller must ensure it is online and can’t go away. Callers that fail to ensure this, may get }(hjhhhNhNubj)}(h**dwork->timer**h]h dwork->timer}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh= queued to an offlined CPU and this will prevent queueing of }(hjhhhNhNubj)}(h**dwork->work**h]h dwork->work}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh. unless the offlined CPU becomes online again.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjubh)}(h **Return**h]j)}(hj6h]hReturn}(hj8hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj4ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjubh)}(h``false`` if **work** was already on a queue, ``true`` otherwise. If **delay** is zero and **dwork** is idle, it will be scheduled for immediate execution.h](j)}(h ``false``h]hfalse}(hjPhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjLubh if }(hjLhhhNhNubj)}(h**work**h]hwork}(hjbhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjLubh was already on a queue, }(hjLhhhNhNubj)}(h``true``h]htrue}(hjthhhNhNubah}(h]h ]h"]h$]h&]uh1jhjLubh otherwise. If }(hjLhhhNhNubj)}(h **delay**h]hdelay}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjLubh is zero and }(hjLhhhNhNubj)}(h **dwork**h]hdwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjLubh7 is idle, it will be scheduled for immediate execution.}(hjLhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j mod_delayed_work_on (C function)c.mod_delayed_work_onhNtauh1jhjhhhNhNubj)}(hhh](j)}(hpbool mod_delayed_work_on (int cpu, struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay)h]j )}(hobool mod_delayed_work_on(int cpu, struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay)h](j)}(hj7&h]hbool}(hjщhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj͉hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM ubj8)}(h h]h }(hj߉hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj͉hhhjމhM ubjI)}(hmod_delayed_work_onh]jO)}(hmod_delayed_work_onh]hmod_delayed_work_on}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhj͉hhhjމhM ubj)}(hW(int cpu, struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay)h](j)}(hint cpuh](j)}(hinth]hint}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj ubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj ubjO)}(hcpuh]hcpu}(hj)hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct workqueue_struct *wqh](j&)}(hj)h]hstruct}(hjBhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hj>ubj8)}(h h]h }(hjOhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj>ubh)}(hhh]jO)}(hworkqueue_structh]hworkqueue_struct}(hj`hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj]ubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjbmodnameN classnameNjojr)}ju]jx)}jkjsbc.mod_delayed_work_onasbuh1hhj>ubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj>ubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj>ubjO)}(hwqh]hwq}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj>ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct delayed_work *dworkh](j&)}(hj)h]hstruct}(hjhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubh)}(hhh]jO)}(h delayed_workh]h 
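For the common case that does not target a specific CPU, queue_delayed_work() wraps this function with ``WORK_CPU_UNBOUND``. A minimal sketch of a self-rearming poller (handler and symbol names are illustrative)::

    #include <linux/jiffies.h>
    #include <linux/workqueue.h>

    static void my_poll_fn(struct work_struct *work)
    {
            struct delayed_work *dwork = to_delayed_work(work);

            /* do the periodic work, then re-arm */
            queue_delayed_work(system_wq, dwork, msecs_to_jiffies(500));
    }

    static DECLARE_DELAYED_WORK(my_poll, my_poll_fn);

    static void my_start_polling(void)
    {
            /* runs my_poll_fn ~100ms from now; returns false if already queued */
            queue_delayed_work(system_wq, &my_poll, msecs_to_jiffies(100));
    }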
delayed_work}(hjҊhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjϊubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjԊmodnameN classnameNjojr)}ju]j|c.mod_delayed_work_onasbuh1hhjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubjO)}(hdworkh]hdwork}(hj hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hunsigned long delayh](j)}(hunsignedh]hunsigned}(hj$hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj ubj8)}(h h]h }(hj2hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj ubj)}(hlongh]hlong}(hj@hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj ubj8)}(h h]h }(hjNhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj ubjO)}(hdelayh]hdelay}(hj\hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1jhj͉hhhjމhM ubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjɉhhhjމhM ubah}(h]jĉah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjމhM hjƉhhubj{)}(hhh]h)}(h7modify delay of or queue a delayed work on specific CPUh]h7modify delay of or queue a delayed work on specific CPU}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjhhubah}(h]h ]h"]h$]h&]uh1jzhjƉhhhjމhM ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jhhhjhNhNubj)}(hX**Parameters** ``int cpu`` CPU number to execute work on ``struct workqueue_struct *wq`` workqueue to use ``struct delayed_work *dwork`` work to queue ``unsigned long delay`` number of jiffies to wait before queueing **Description** If **dwork** is idle, equivalent to queue_delayed_work_on(); otherwise, modify **dwork**'s timer so that it expires after **delay**. If **delay** is zero, **work** is guaranteed to be scheduled immediately regardless of its current state. This function is safe to call from any context including IRQ handler. See try_to_grab_pending() for details. 
**Return** ``false`` if **dwork** was idle and queued, ``true`` if **dwork** was pending and its timer was modified.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjubjS)}(hhh](jX)}(h*``int cpu`` CPU number to execute work on h](j^)}(h ``int cpu``h]j)}(hjNjh]hint cpu}(hjɋhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjŋubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjubjw)}(hhh]h)}(hCPU number to execute work onh]hCPU number to execute work on}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj܋hM hj݋ubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhj܋hM hjubjX)}(h1``struct workqueue_struct *wq`` workqueue to use h](j^)}(h``struct workqueue_struct *wq``h]j)}(hjh]hstruct workqueue_struct *wq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjubjw)}(hhh]h)}(hworkqueue to useh]hworkqueue to use}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM hjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhM hjubjX)}(h-``struct delayed_work *dwork`` work to queue h](j^)}(h``struct delayed_work *dwork``h]j)}(hj9h]hstruct delayed_work *dwork}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj7ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hj3ubjw)}(hhh]h)}(h work to queueh]h work to queue}(hjRhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjNhM hjOubah}(h]h ]h"]h$]h&]uh1jvhj3ubeh}(h]h ]h"]h$]h&]uh1jWhjNhM hjubjX)}(hB``unsigned long delay`` number of jiffies to wait before queueing h](j^)}(h``unsigned long delay``h]j)}(hjrh]hunsigned long delay}(hjthhhNhNubah}(h]h ]h"]h$]h&]uh1jhjpubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjlubjw)}(hhh]h)}(h)number of jiffies to wait before queueingh]h)number of jiffies to wait before queueing}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM hjubah}(h]h ]h"]h$]h&]uh1jvhjlubeh}(h]h ]h"]h$]h&]uh1jWhjhM hjubeh}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjubh)}(hIf **dwork** is idle, equivalent to queue_delayed_work_on(); otherwise, modify **dwork**'s timer so that it expires after **delay**. If **delay** is zero, **work** is guaranteed to be scheduled immediately regardless of its current state.h](hIf }(hjÌhhhNhNubj)}(h **dwork**h]hdwork}(hjˌhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjÌubhC is idle, equivalent to queue_delayed_work_on(); otherwise, modify }(hjÌhhhNhNubj)}(h **dwork**h]hdwork}(hj݌hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjÌubh$’s timer so that it expires after }(hjÌhhhNhNubj)}(h **delay**h]hdelay}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjÌubh. If }(hjÌhhhNhNubj)}(h **delay**h]hdelay}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjÌubh is zero, }(hjÌhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjÌubhK is guaranteed to be scheduled immediately regardless of its current state.}(hjÌhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjubh)}(hlThis function is safe to call from any context including IRQ handler. See try_to_grab_pending() for details.h]hlThis function is safe to call from any context including IRQ handler. 
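Because it either queues the work or re-arms its timer, mod_delayed_work() (the ``WORK_CPU_UNBOUND`` wrapper of this function) works as a debounce primitive; a hedged sketch, with ``my_flush_fn`` standing in for whatever the driver defers::

    #include <linux/workqueue.h>

    static void my_flush_fn(struct work_struct *work)
    {
            /* flush accumulated state once the source has gone quiet */
    }

    static DECLARE_DELAYED_WORK(my_flush, my_flush_fn);

    static void my_note_activity(void)
    {
            /*
             * Each call pushes the deadline out again, so my_flush_fn
             * runs only after roughly one second of inactivity.
             */
            mod_delayed_work(system_wq, &my_flush, HZ);
    }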
See try_to_grab_pending() for details.}(hj,hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjubh)}(h **Return**h]j)}(hj=h]hReturn}(hj?hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj;ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjubh)}(hi``false`` if **dwork** was idle and queued, ``true`` if **dwork** was pending and its timer was modified.h](j)}(h ``false``h]hfalse}(hjWhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjSubh if }(hjShhhNhNubj)}(h **dwork**h]hdwork}(hjihhhNhNubah}(h]h ]h"]h$]h&]uh1jhjSubh was idle and queued, }(hjShhhNhNubj)}(h``true``h]htrue}(hj{hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjSubh if }hjSsbj)}(h **dwork**h]hdwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjSubh( was pending and its timer was modified.}(hjShhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jqueue_rcu_work (C function)c.queue_rcu_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(hIbool queue_rcu_work (struct workqueue_struct *wq, struct rcu_work *rwork)h]j )}(hHbool queue_rcu_work(struct workqueue_struct *wq, struct rcu_work *rwork)h](j)}(hj7&h]hbool}(hjƍhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM@ ubj8)}(h h]h }(hjԍhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjhhhjӍhM@ ubjI)}(hqueue_rcu_workh]jO)}(hqueue_rcu_workh]hqueue_rcu_work}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjhhhjӍhM@ ubj)}(h5(struct workqueue_struct *wq, struct rcu_work *rwork)h](j)}(hstruct workqueue_struct *wqh](j&)}(hj)h]hstruct}(hjhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubh)}(hhh]jO)}(hworkqueue_structh]hworkqueue_struct}(hj hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetj"modnameN classnameNjojr)}ju]jx)}jkjsbc.queue_rcu_workasbuh1hhjubj8)}(h h]h }(hj@hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubj)}(hjah]h*}(hjNhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubjO)}(hwqh]hwq}(hj[hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct rcu_work *rworkh](j&)}(hj)h]hstruct}(hjthhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjpubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjpubh)}(hhh]jO)}(hrcu_workh]hrcu_work}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjmodnameN classnameNjojr)}ju]j<c.queue_rcu_workasbuh1hhjpubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjpubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjpubjO)}(hrworkh]hrwork}(hjˎhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjpubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1jhjhhhjӍhM@ ubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjhhhjӍhM@ ubah}(h]jah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjӍhM@ hjhhubj{)}(hhh]h)}(h#queue work after a RCU grace periodh]h#queue work after a RCU grace period}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM7 hjhhubah}(h]h ]h"]h$]h&]uh1jzhjhhhjӍhM@ ubeh}(h]h ](jfunctioneh"]h$]h&]jjjj jj jjjuh1jhhhjhNhNubj)}(hX**Parameters** ``struct workqueue_struct *wq`` workqueue to use ``struct rcu_work *rwork`` work to queue **Return** ``false`` if **rwork** was already pending, ``true`` otherwise. Note that a full RCU grace period is guaranteed only after a ``true`` return. 
While **rwork** is guaranteed to be executed after a ``false`` return, the execution may happen before a full RCU grace period has passed.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM; hjubjS)}(hhh](jX)}(h1``struct workqueue_struct *wq`` workqueue to use h](j^)}(h``struct workqueue_struct *wq``h]j)}(hj6h]hstruct workqueue_struct *wq}(hj8hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj4ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM8 hj0ubjw)}(hhh]h)}(hworkqueue to useh]hworkqueue to use}(hjOhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjKhM8 hjLubah}(h]h ]h"]h$]h&]uh1jvhj0ubeh}(h]h ]h"]h$]h&]uh1jWhjKhM8 hj-ubjX)}(h)``struct rcu_work *rwork`` work to queue h](j^)}(h``struct rcu_work *rwork``h]j)}(hjoh]hstruct rcu_work *rwork}(hjqhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjmubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM9 hjiubjw)}(hhh]h)}(h work to queueh]h work to queue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM9 hjubah}(h]h ]h"]h$]h&]uh1jvhjiubeh}(h]h ]h"]h$]h&]uh1jWhjhM9 hj-ubeh}(h]h ]h"]h$]h&]uh1jRhjubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM; hjubh)}(hX``false`` if **rwork** was already pending, ``true`` otherwise. Note that a full RCU grace period is guaranteed only after a ``true`` return. While **rwork** is guaranteed to be executed after a ``false`` return, the execution may happen before a full RCU grace period has passed.h](j)}(h ``false``h]hfalse}(hjďhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if }(hjhhhNhNubj)}(h **rwork**h]hrwork}(hj֏hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh was already pending, }(hjhhhNhNubj)}(h``true``h]htrue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhJ otherwise. Note that a full RCU grace period is guaranteed only after a }(hjhhhNhNubj)}(h``true``h]htrue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh return. 
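A hedged sketch of the typical use: free an RCU-protected object from process context after a grace period, via a ``struct rcu_work`` embedded in the object (struct and helper names are illustrative)::

    #include <linux/slab.h>
    #include <linux/workqueue.h>

    struct my_obj {
            struct rcu_work free_rwork;
            /* ... payload ... */
    };

    static void my_obj_free_fn(struct work_struct *work)
    {
            struct my_obj *obj =
                    container_of(to_rcu_work(work), struct my_obj, free_rwork);

            kfree(obj);     /* runs after an RCU grace period has elapsed */
    }

    static void my_obj_release(struct my_obj *obj)
    {
            INIT_RCU_WORK(&obj->free_rwork, my_obj_free_fn);
            queue_rcu_work(system_wq, &obj->free_rwork);
    }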
``void worker_attach_to_pool(struct worker *worker, struct worker_pool *pool)``
   attach a worker to a pool

**Parameters**

``struct worker *worker``
   worker to be attached

``struct worker_pool *pool``
   the target pool

**Description**

Attach **worker** to **pool**.  Once attached, the ``WORKER_UNBOUND`` flag and cpu-binding of **worker** are kept coordinated with the pool across cpu-[un]hotplugs.

``void worker_detach_from_pool(struct worker *worker)``
   detach a worker from its pool

**Parameters**

``struct worker *worker``
   worker which is attached to its pool

**Description**

Undo the attaching which had been done in worker_attach_to_pool().  The calling worker should not access the pool after detaching unless it holds another reference to the pool.
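At the API level, the cpu-binding mentioned for worker_attach_to_pool() above shows up as the choice between per-cpu and unbound workqueues.  A usage-level sketch with made-up names (my_fn() and the two work items): per-cpu workqueues run work on the queueing or an explicitly chosen CPU, while unbound workqueues leave placement to the scheduler and are served by ``WORKER_UNBOUND`` workers::

    #include <linux/workqueue.h>

    static void my_fn(struct work_struct *work)
    {
            /* ... */
    }
    static DECLARE_WORK(my_percpu_work, my_fn);
    static DECLARE_WORK(my_unbound_work, my_fn);

    static void my_queue_examples(void)
    {
            /* per-cpu: runs on CPU 2's per-cpu pool */
            queue_work_on(2, system_wq, &my_percpu_work);

            /* unbound: the scheduler decides where it runs */
            queue_work(system_unbound_wq, &my_unbound_work);
    }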
``struct worker *create_worker(struct worker_pool *pool)``
   create a new workqueue worker

**Parameters**

``struct worker_pool *pool``
   pool the new worker will belong to

**Description**

Create and start a new worker which is attached to **pool**.

**Context**

Might sleep.  Does GFP_KERNEL allocations.

**Return**

Pointer to the newly created worker.

``void set_worker_dying(struct worker *worker, struct list_head *list)``
   tag a worker for destruction

**Parameters**

``struct worker *worker``
   worker to be destroyed

``struct list_head *list``
   transfer worker away from its pool->idle_list and into list

**Description**

Tag **worker** for destruction and adjust **pool** stats accordingly.  The worker should be idle.

**Context**

raw_spin_lock_irq(pool->lock).

``void idle_worker_timeout(struct timer_list *t)``
   check if some idle workers can now be deleted.

**Parameters**

``struct timer_list *t``
   The pool's idle_timer that just expired

**Description**

The timer is armed in worker_enter_idle().  Note that it isn't disarmed in worker_leave_idle(), as a worker flicking between idle and active while its pool is at the too_many_workers() tipping point would cause too much timer housekeeping overhead.  Since IDLE_WORKER_TIMEOUT is long enough, we just let it expire and re-evaluate things from there.
``void idle_cull_fn(struct work_struct *work)``
   cull workers that have been idle for too long.

**Parameters**

``struct work_struct *work``
   the pool's work for handling these idle workers

**Description**

This goes through a pool's idle workers and gets rid of those that have been idle for at least IDLE_WORKER_TIMEOUT seconds.

We don't want to disturb isolated CPUs because of a pcpu kworker being culled, so this also resets worker affinity.  This requires a sleepable context, hence the split between timer callback and work item.
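The timer-callback/work-item split described above is a common kernel pattern: the timer callback runs in atomic context and only queues a work item, while the sleepable cleanup happens in the work callback.  A generic, illustrative sketch, not the workqueue code itself; all names below are made up::

    #include <linux/kernel.h>
    #include <linux/timer.h>
    #include <linux/workqueue.h>

    struct my_pool {
            struct timer_list idle_timer;
            struct work_struct idle_cull_work;
    };

    static void my_idle_cull_fn(struct work_struct *work)
    {
            struct my_pool *pool =
                    container_of(work, struct my_pool, idle_cull_work);

            /* may sleep here: take mutexes, adjust affinities, etc. */
    }

    static void my_idle_timeout(struct timer_list *t)
    {
            struct my_pool *pool = from_timer(pool, t, idle_timer);

            /* atomic context: defer the sleepable part to a work item */
            queue_work(system_wq, &pool->idle_cull_work);
    }

    static void my_pool_init(struct my_pool *pool)
    {
            timer_setup(&pool->idle_timer, my_idle_timeout, 0);
            INIT_WORK(&pool->idle_cull_work, my_idle_cull_fn);
    }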
``void maybe_create_worker(struct worker_pool *pool)``
   create a new worker if necessary

**Parameters**

``struct worker_pool *pool``
   pool to create a new worker for

**Description**

Create a new worker for **pool** if necessary.  **pool** is guaranteed to have at least one idle worker on return from this function.  If creating a new worker takes longer than MAYDAY_INTERVAL, mayday is sent to all rescuers with works scheduled on **pool** to resolve possible allocation deadlock.

On return, need_to_create_worker() is guaranteed to be ``false`` and may_start_working() ``true``.

LOCKING: raw_spin_lock_irq(pool->lock) which may be released and regrabbed multiple times.  Does GFP_KERNEL allocations.  Called only from manager.
``bool manage_workers(struct worker *worker)``
   manage worker pool

**Parameters**

``struct worker *worker``
   self

**Description**

Assume the manager role and manage the worker pool **worker** belongs to.  At any given time, there can be only zero or one manager per pool.  The exclusion is handled automatically by this function.

The caller can safely start processing works on false return.  On true return, it's guaranteed that need_to_create_worker() is false and may_start_working() is true.

**Context**

raw_spin_lock_irq(pool->lock) which may be released and regrabbed multiple times.  Does GFP_KERNEL allocations.

**Return**

``false`` if the pool doesn't need management and the caller can safely start processing works, ``true`` if management function was performed and the conditions that the caller verified before calling the function may no longer be true.
``void process_one_work(struct worker *worker, struct work_struct *work)``
   process single work

**Parameters**

``struct worker *worker``
   self

``struct work_struct *work``
   work to process

**Description**

Process **work**.  This function contains all the logic necessary to process a single work item, including synchronization against and interaction with other workers on the same cpu, queueing and flushing.  As long as the context requirement is met, any worker can call this function to process a work item.

**Context**

raw_spin_lock_irq(pool->lock) which is released and regrabbed.
``void process_scheduled_works(struct worker *worker)``
   process scheduled works

**Parameters**

``struct worker *worker``
   self

**Description**

Process all scheduled works.  Please note that the scheduled list may change while processing a work item, so this function repeatedly fetches a work item from the top and executes it.

**Context**

raw_spin_lock_irq(pool->lock) which may be released and regrabbed multiple times.

``int worker_thread(void *__worker)``
   the worker thread function

**Parameters**

``void *__worker``
   self

**Description**

The worker thread function.  All workers belong to a worker_pool - either a per-cpu one or a dynamic unbound one.  These workers process all work items regardless of their specific target workqueue.  The only exception is work items which belong to workqueues with a rescuer, which will be explained in rescuer_thread().

**Return**

0
``int rescuer_thread(void *__rescuer)``
   the rescuer thread function

**Parameters**

``void *__rescuer``
   self

**Description**

Workqueue rescuer thread function.  There's one rescuer for each workqueue which has WQ_MEM_RECLAIM set.

Regular work processing on a pool may block trying to create a new worker, which uses a GFP_KERNEL allocation that has a slight chance of developing into a deadlock if some works currently on the same queue need to be processed to satisfy the GFP_KERNEL allocation.  This is the problem the rescuer solves.

When such a condition is possible, the pool summons rescuers of all workqueues which have works queued on the pool and lets them process those works so that forward progress can be guaranteed.

This should happen rarely.

**Return**

0
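Drivers opt into this guarantee when allocating their workqueue.  A minimal sketch with made-up names (``my_dev_wq``, my_io_complete()): passing ``WQ_MEM_RECLAIM`` to alloc_workqueue() creates the rescuer up front, so work items on the memory-reclaim path keep making progress even when new kworkers cannot be spawned::

    #include <linux/errno.h>
    #include <linux/workqueue.h>

    static struct workqueue_struct *my_dev_wq;

    static void my_io_complete(struct work_struct *work)
    {
            /* runs on my_dev_wq; may sit on the memory-reclaim path */
    }
    static DECLARE_WORK(my_io_work, my_io_complete);

    static int my_dev_init(void)
    {
            /* WQ_MEM_RECLAIM => a rescuer thread is created for this wq */
            my_dev_wq = alloc_workqueue("my_dev_wq", WQ_MEM_RECLAIM, 0);
            if (!my_dev_wq)
                    return -ENOMEM;

            queue_work(my_dev_wq, &my_io_work);
            return 0;
    }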
``void check_flush_dependency(struct workqueue_struct *target_wq, struct work_struct *target_work, bool from_cancel)``
   check for flush dependency sanity

**Parameters**

``struct workqueue_struct *target_wq``
   workqueue being flushed

``struct work_struct *target_work``
   work item being flushed (NULL for workqueue flushes)

``bool from_cancel``
   are we called from the work cancel path

**Description**

``current`` is trying to flush the whole **target_wq** or **target_work** on it.  If this is not the cancel path (which implies the work being flushed is either already running, or will not run at all), check that **target_wq** doesn't have ``WQ_MEM_RECLAIM`` and verify that ``current`` is not reclaiming memory or running on a workqueue which doesn't have ``WQ_MEM_RECLAIM``, as that can break the forward-progress guarantee leading to a deadlock.
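A sketch of the dependency this check warns about, using made-up workqueue names: a work item executing on a ``WQ_MEM_RECLAIM`` workqueue must not wait on a workqueue without that flag, because under memory pressure the plain workqueue may be unable to make forward progress::

    #include <linux/workqueue.h>

    static struct workqueue_struct *reclaim_wq;  /* alloc_workqueue(..., WQ_MEM_RECLAIM, 0) */
    static struct workqueue_struct *plain_wq;    /* alloc_workqueue(..., 0, 0) */

    static void reclaim_side_fn(struct work_struct *work)
    {
            /*
             * BAD: flushing a !WQ_MEM_RECLAIM workqueue from a work item
             * running on a WQ_MEM_RECLAIM workqueue; check_flush_dependency()
             * will warn about this dependency.
             */
            flush_workqueue(plain_wq);
    }
    static DECLARE_WORK(reclaim_side_work, reclaim_side_fn);

    static void kick_reclaim_side(void)
    {
            queue_work(reclaim_wq, &reclaim_side_work);
    }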
refdomainjreftypejk reftargetjګmodnameN classnameNjojr)}ju]jc.insert_wq_barrierasbuh1hhjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubjO)}(hworkerh]hworker}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj`ubeh}(h]h ]h"]h$]h&]jjuh1jhj'hhhj9hMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhj#hhhj9hMubah}(h]jah ](jrjseh"]h$]h&]jwjx)jyhuh1jhj9hMhj hhubj{)}(hhh]h)}(hinsert a barrier workh]hinsert a barrier work}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj8hhubah}(h]h ]h"]h$]h&]uh1jzhj hhhj9hMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjSjjSjjjuh1jhhhjhNhNubj)}(hX**Parameters** ``struct pool_workqueue *pwq`` pwq to insert barrier into ``struct wq_barrier *barr`` wq_barrier to insert ``struct work_struct *target`` target work to attach **barr** to ``struct worker *worker`` worker currently executing **target**, NULL if **target** is not executing **Description** **barr** is linked to **target** such that **barr** is completed only after **target** finishes execution. Please note that the ordering guarantee is observed only with respect to **target** and on the local cpu. Currently, a queued barrier can't be canceled. This is because try_to_grab_pending() can't determine whether the work to be grabbed is at the head of the queue and thus can't clear LINKED flag of the previous work while there must be a valid next work after a work with LINKED flag set. Note that when **worker** is non-NULL, **target** may be modified underneath us, so we can't reliably determine pwq from **target**. **Context** raw_spin_lock_irq(pool->lock).h](h)}(h**Parameters**h]j)}(hj]h]h Parameters}(hj_hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj[ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjWubjS)}(hhh](jX)}(h:``struct pool_workqueue *pwq`` pwq to insert barrier into h](j^)}(h``struct pool_workqueue *pwq``h]j)}(hj|h]hstruct pool_workqueue *pwq}(hj~hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjzubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjvubjw)}(hhh]h)}(hpwq to insert barrier intoh]hpwq to insert barrier into}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjvubeh}(h]h ]h"]h$]h&]uh1jWhjhMhjsubjX)}(h1``struct wq_barrier *barr`` wq_barrier to insert h](j^)}(h``struct wq_barrier *barr``h]j)}(hjh]hstruct wq_barrier *barr}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(hwq_barrier to inserth]hwq_barrier to insert}(hjάhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjʬhMhjˬubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjʬhMhjsubjX)}(hA``struct work_struct *target`` target work to attach **barr** to h](j^)}(h``struct work_struct *target``h]j)}(hjh]hstruct work_struct *target}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(h!target work to attach **barr** toh](htarget work to attach }(hjhhhNhNubj)}(h**barr**h]hbarr}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh to}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMhjsubjX)}(he``struct worker *worker`` worker currently executing **target**, NULL if **target** is not executing h](j^)}(h``struct worker *worker``h]j)}(hj9h]hstruct worker 
*worker}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj7ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj3ubjw)}(hhh]h)}(hJworker currently executing **target**, NULL if **target** is not executingh](hworker currently executing }(hjRhhhNhNubj)}(h **target**h]htarget}(hjZhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjRubh , NULL if }(hjRhhhNhNubj)}(h **target**h]htarget}(hjlhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjRubh is not executing}(hjRhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhjNhMhjOubah}(h]h ]h"]h$]h&]uh1jvhj3ubeh}(h]h ]h"]h$]h&]uh1jWhjNhMhjsubeh}(h]h ]h"]h$]h&]uh1jRhjWubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjWubh)}(h**barr** is linked to **target** such that **barr** is completed only after **target** finishes execution. Please note that the ordering guarantee is observed only with respect to **target** and on the local cpu.h](j)}(h**barr**h]hbarr}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh is linked to }(hjhhhNhNubj)}(h **target**h]htarget}(hjĭhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh such that }(hjhhhNhNubj)}(h**barr**h]hbarr}(hj֭hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh is completed only after }(hjhhhNhNubj)}(h **target**h]htarget}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh_ finishes execution. Please note that the ordering guarantee is observed only with respect to }(hjhhhNhNubj)}(h **target**h]htarget}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh and on the local cpu.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjWubh)}(hXCurrently, a queued barrier can't be canceled. This is because try_to_grab_pending() can't determine whether the work to be grabbed is at the head of the queue and thus can't clear LINKED flag of the previous work while there must be a valid next work after a work with LINKED flag set.h]hX%Currently, a queued barrier can’t be canceled. 
This is because try_to_grab_pending() can’t determine whether the work to be grabbed is at the head of the queue and thus can’t clear LINKED flag of the previous work while there must be a valid next work after a work with LINKED flag set.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjWubh)}(hNote that when **worker** is non-NULL, **target** may be modified underneath us, so we can't reliably determine pwq from **target**.h](hNote that when }(hj"hhhNhNubj)}(h **worker**h]hworker}(hj*hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj"ubh is non-NULL, }(hj"hhhNhNubj)}(h **target**h]htarget}(hj<hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj"ubhJ may be modified underneath us, so we can’t reliably determine pwq from }(hj"hhhNhNubj)}(h **target**h]htarget}(hjNhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj"ubh.}(hj"hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjWubh)}(h **Context**h]j)}(hjih]hContext}(hjkhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjgubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjWubh)}(hraw_spin_lock_irq(pool->lock).h]hraw_spin_lock_irq(pool->lock).}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjWubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j&flush_workqueue_prep_pwqs (C function)c.flush_workqueue_prep_pwqshNtauh1jhjhhhNhNubj)}(hhh](j)}(h]bool flush_workqueue_prep_pwqs (struct workqueue_struct *wq, int flush_color, int work_color)h]j )}(h\bool flush_workqueue_prep_pwqs(struct workqueue_struct *wq, int flush_color, int work_color)h](j)}(hj7&h]hbool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjhhhjhMubjI)}(hflush_workqueue_prep_pwqsh]jO)}(hflush_workqueue_prep_pwqsh]hflush_workqueue_prep_pwqs}(hjήhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjʮubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjhhhjhMubj)}(h>(struct workqueue_struct *wq, int flush_color, int work_color)h](j)}(hstruct workqueue_struct *wqh](j&)}(hj)h]hstruct}(hjhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubh)}(hhh]jO)}(hworkqueue_structh]hworkqueue_struct}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetj modnameN classnameNjojr)}ju]jx)}jkjЮsbc.flush_workqueue_prep_pwqsasbuh1hhjubj8)}(h h]h }(hj(hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubj)}(hjah]h*}(hj6hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubjO)}(hwqh]hwq}(hjChhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hint flush_colorh](j)}(hinth]hint}(hj\hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjXubj8)}(h h]h }(hjjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjXubjO)}(h flush_colorh]h flush_color}(hjxhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjXubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hint work_colorh](j)}(hinth]hint}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubjO)}(h work_colorh]h work_color}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1jhjhhhjhMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjhhhjhMubah}(h]jah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjhMhjhhubj{)}(hhh]h)}(h#prepare pwqs for workqueue flushingh]h#prepare pwqs for workqueue flushing}(hjׯhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: 
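The barrier mechanism can be pictured, outside the kernel internals, as queueing an extra work item behind the one being flushed and waiting on a completion it signals. The sketch below is only an out-of-tree approximation for an ordered (``max_active`` = 1) workqueue; the real insert_wq_barrier() links the barrier directly behind **target** inside the pool and manages the LINKED flag as described above. All names here are illustrative::

    #include <linux/completion.h>
    #include <linux/workqueue.h>

    struct crude_barrier {                      /* illustrative type */
            struct work_struct work;
            struct completion done;
    };

    static void crude_barrier_fn(struct work_struct *work)
    {
            struct crude_barrier *b = container_of(work, struct crude_barrier, work);

            complete(&b->done);
    }

    /* Wait for everything queued so far on an ordered workqueue. */
    static void crude_flush(struct workqueue_struct *ordered_wq)
    {
            struct crude_barrier b;

            INIT_WORK_ONSTACK(&b.work, crude_barrier_fn);
            init_completion(&b.done);
            queue_work(ordered_wq, &b.work);
            wait_for_completion(&b.done);
            destroy_work_on_stack(&b.work);
    }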
./kernel/workqueue.chMhjԯhhubah}(h]h ]h"]h$]h&]uh1jzhjhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jhhhjhNhNubj)}(hXa**Parameters** ``struct workqueue_struct *wq`` workqueue being flushed ``int flush_color`` new flush color, < 0 for no-op ``int work_color`` new work color, < 0 for no-op **Description** Prepare pwqs for workqueue flushing. If **flush_color** is non-negative, flush_color on all pwqs should be -1. If no pwq has in-flight commands at the specified color, all pwq->flush_color's stay at -1 and ``false`` is returned. If any pwq has in flight commands, its pwq->flush_color is set to **flush_color**, **wq->nr_pwqs_to_flush** is updated accordingly, pwq wakeup logic is armed and ``true`` is returned. The caller should have initialized **wq->first_flusher** prior to calling this function with non-negative **flush_color**. If **flush_color** is negative, no flush color update is done and ``false`` is returned. If **work_color** is non-negative, all pwqs should have the same work_color which is previous to **work_color** and all will be advanced to **work_color**. **Context** mutex_lock(wq->mutex). **Return** ``true`` if **flush_color** >= 0 and there's something to flush. ``false`` otherwise.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubjS)}(hhh](jX)}(h8``struct workqueue_struct *wq`` workqueue being flushed h](j^)}(h``struct workqueue_struct *wq``h]j)}(hjh]hstruct workqueue_struct *wq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(hworkqueue being flushedh]hworkqueue being flushed}(hj1hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj-hMhj.ubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhj-hMhjubjX)}(h3``int flush_color`` new flush color, < 0 for no-op h](j^)}(h``int flush_color``h]j)}(hjQh]hint flush_color}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1jhjOubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjKubjw)}(hhh]h)}(hnew flush color, < 0 for no-oph]hnew flush color, < 0 for no-op}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjfhMhjgubah}(h]h ]h"]h$]h&]uh1jvhjKubeh}(h]h ]h"]h$]h&]uh1jWhjfhMhjubjX)}(h1``int work_color`` new work color, < 0 for no-op h](j^)}(h``int work_color``h]j)}(hjh]hint work_color}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(hnew work color, < 0 for no-oph]hnew work color, < 0 for no-op}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMhjubeh}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjŰh]h Description}(hjǰhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjðubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubh)}(h$Prepare pwqs for workqueue flushing.h]h$Prepare pwqs for workqueue flushing.}(hj۰hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubh)}(hXyIf **flush_color** is non-negative, flush_color on all pwqs should be -1. If no pwq has in-flight commands at the specified color, all pwq->flush_color's stay at -1 and ``false`` is returned. 
If any pwq has in flight commands, its pwq->flush_color is set to **flush_color**, **wq->nr_pwqs_to_flush** is updated accordingly, pwq wakeup logic is armed and ``true`` is returned.h](hIf }(hjhhhNhNubj)}(h**flush_color**h]h flush_color}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh is non-negative, flush_color on all pwqs should be -1. If no pwq has in-flight commands at the specified color, all pwq->flush_color’s stay at -1 and }(hjhhhNhNubj)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhQ is returned. If any pwq has in flight commands, its pwq->flush_color is set to }(hjhhhNhNubj)}(h**flush_color**h]h flush_color}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh, }(hjhhhNhNubj)}(h**wq->nr_pwqs_to_flush**h]hwq->nr_pwqs_to_flush}(hj(hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh7 is updated accordingly, pwq wakeup logic is armed and }(hjhhhNhNubj)}(h``true``h]htrue}(hj:hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh is returned.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubh)}(hThe caller should have initialized **wq->first_flusher** prior to calling this function with non-negative **flush_color**. If **flush_color** is negative, no flush color update is done and ``false`` is returned.h](h#The caller should have initialized }(hjShhhNhNubj)}(h**wq->first_flusher**h]hwq->first_flusher}(hj[hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjSubh2 prior to calling this function with non-negative }(hjShhhNhNubj)}(h**flush_color**h]h flush_color}(hjmhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjSubh. If }(hjShhhNhNubj)}(h**flush_color**h]h flush_color}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjSubh0 is negative, no flush color update is done and }(hjShhhNhNubj)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjSubh is returned.}(hjShhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubh)}(hIf **work_color** is non-negative, all pwqs should have the same work_color which is previous to **work_color** and all will be advanced to **work_color**.h](hIf }(hjhhhNhNubj)}(h**work_color**h]h work_color}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhP is non-negative, all pwqs should have the same work_color which is previous to }(hjhhhNhNubj)}(h**work_color**h]h work_color}(hjıhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh and all will be advanced to }(hjhhhNhNubj)}(h**work_color**h]h work_color}(hjֱhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubh)}(h **Context**h]j)}(hjh]hContext}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubh)}(hmutex_lock(wq->mutex).h]hmutex_lock(wq->mutex).}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubh)}(hV``true`` if **flush_color** >= 0 and there's something to flush. ``false`` otherwise.h](j)}(h``true``h]htrue}(hj2hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj.ubh if }(hj.hhhNhNubj)}(h**flush_color**h]h flush_color}(hjDhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj.ubh) >= 0 and there’s something to flush. 
}(hj.hhhNhNubj)}(h ``false``h]hfalse}(hjVhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj.ubh otherwise.}(hj.hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j__flush_workqueue (C function)c.__flush_workqueuehNtauh1jhjhhhNhNubj)}(hhh](j)}(h4void __flush_workqueue (struct workqueue_struct *wq)h]j )}(h3void __flush_workqueue(struct workqueue_struct *wq)h](j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMaubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjhhhjhMaubjI)}(h__flush_workqueueh]jO)}(h__flush_workqueueh]h__flush_workqueue}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjhhhjhMaubj)}(h(struct workqueue_struct *wq)h]j)}(hstruct workqueue_struct *wqh](j&)}(hj)h]hstruct}(hj̲hhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjȲubj8)}(h h]h }(hjٲhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjȲubh)}(hhh]jO)}(hworkqueue_structh]hworkqueue_struct}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjmodnameN classnameNjojr)}ju]jx)}jkjsbc.__flush_workqueueasbuh1hhjȲubj8)}(h h]h }(hj hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjȲubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjȲubjO)}(hwqh]hwq}(hj%hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjȲubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjIJubah}(h]h ]h"]h$]h&]jjuh1jhjhhhjhMaubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjhhhjhMaubah}(h]jah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjhMahjhhubj{)}(hhh]h)}(h5ensure that any scheduled work has run to completion.h]h5ensure that any scheduled work has run to completion.}(hjOhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM[hjLhhubah}(h]h ]h"]h$]h&]uh1jzhjhhhjhMaubeh}(h]h ](jfunctioneh"]h$]h&]jjjjgjjgjjjuh1jhhhjhNhNubj)}(h**Parameters** ``struct workqueue_struct *wq`` workqueue to flush **Description** This function sleeps until all work items which were queued on entry have finished execution, but it is not livelocked by new incoming ones.h](h)}(h**Parameters**h]j)}(hjqh]h Parameters}(hjshhhNhNubah}(h]h ]h"]h$]h&]uh1jhjoubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM_hjkubjS)}(hhh]jX)}(h3``struct workqueue_struct *wq`` workqueue to flush h](j^)}(h``struct workqueue_struct *wq``h]j)}(hjh]hstruct workqueue_struct *wq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM\hjubjw)}(hhh]h)}(hworkqueue to flushh]hworkqueue to flush}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM\hjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhM\hjubah}(h]h ]h"]h$]h&]uh1jRhjkubh)}(h**Description**h]j)}(hj˳h]h Description}(hjͳhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjɳubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM^hjkubh)}(hThis function sleeps until all work items which were queued on entry have finished execution, but it is not livelocked by new incoming ones.h]hThis function sleeps until all work items which were queued on entry have finished execution, but it is not livelocked by new incoming ones.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM^hjkubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jdrain_workqueue 
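In practice callers use the flush_workqueue() wrapper rather than __flush_workqueue() directly. A minimal, hypothetical usage sketch (names are illustrative; the work item is assumed to have been set up with INIT_WORK() elsewhere)::

    #include <linux/workqueue.h>

    static struct workqueue_struct *stats_wq;   /* hypothetical */
    static struct work_struct collect_work;     /* INIT_WORK()'d at init time */

    static void stats_sync(void)
    {
            queue_work(stats_wq, &collect_work);
            /*
             * Sleeps until every work item queued on stats_wq up to this
             * point, including collect_work, has finished executing.
             */
            flush_workqueue(stats_wq);
    }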
(C function)c.drain_workqueuehNtauh1jhjhhhNhNubj)}(hhh](j)}(h2void drain_workqueue (struct workqueue_struct *wq)h]j )}(h1void drain_workqueue(struct workqueue_struct *wq)h](j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj hhhjhMubjI)}(hdrain_workqueueh]jO)}(hdrain_workqueueh]hdrain_workqueue}(hj1hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj-ubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhj hhhjhMubj)}(h(struct workqueue_struct *wq)h]j)}(hstruct workqueue_struct *wqh](j&)}(hj)h]hstruct}(hjMhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjIubj8)}(h h]h }(hjZhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjIubh)}(hhh]jO)}(hworkqueue_structh]hworkqueue_struct}(hjkhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjhubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjmmodnameN classnameNjojr)}ju]jx)}jkj3sbc.drain_workqueueasbuh1hhjIubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjIubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjIubjO)}(hwqh]hwq}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjIubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjEubah}(h]h ]h"]h$]h&]jjuh1jhj hhhjhMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjhhhjhMubah}(h]jah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjhMhjhhubj{)}(hhh]h)}(hdrain a workqueueh]hdrain a workqueue}(hjдhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjʹhhubah}(h]h ]h"]h$]h&]uh1jzhjhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jhhhjhNhNubj)}(hX**Parameters** ``struct workqueue_struct *wq`` workqueue to drain **Description** Wait until the workqueue becomes empty. While draining is in progress, only chain queueing is allowed. IOW, only currently pending or running work items on **wq** can queue further work items on it. **wq** is flushed repeatedly until it becomes empty. The number of flushing is determined by the depth of chaining and should be relatively short. Whine if it takes too long.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubjS)}(hhh]jX)}(h3``struct workqueue_struct *wq`` workqueue to drain h](j^)}(h``struct workqueue_struct *wq``h]j)}(hjh]hstruct workqueue_struct *wq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj ubjw)}(hhh]h)}(hworkqueue to drainh]hworkqueue to drain}(hj*hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj&hMhj'ubah}(h]h ]h"]h$]h&]uh1jvhj ubeh}(h]h ]h"]h$]h&]uh1jWhj&hMhjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjLh]h Description}(hjNhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjJubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubh)}(hXzWait until the workqueue becomes empty. While draining is in progress, only chain queueing is allowed. IOW, only currently pending or running work items on **wq** can queue further work items on it. **wq** is flushed repeatedly until it becomes empty. The number of flushing is determined by the depth of chaining and should be relatively short. Whine if it takes too long.h](hWait until the workqueue becomes empty. While draining is in progress, only chain queueing is allowed. IOW, only currently pending or running work items on }(hjbhhhNhNubj)}(h**wq**h]hwq}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjbubh& can queue further work items on it. 
}(hjbhhhNhNubj)}(h**wq**h]hwq}(hj|hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjbubh is flushed repeatedly until it becomes empty. The number of flushing is determined by the depth of chaining and should be relatively short. Whine if it takes too long.}(hjbhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jflush_work (C function) c.flush_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(h*bool flush_work (struct work_struct *work)h]j )}(h)bool flush_work(struct work_struct *work)h](j)}(hj7&h]hbool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hjõhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjhhhjµhMubjI)}(h flush_workh]jO)}(h flush_workh]h flush_work}(hjյhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjѵubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjhhhjµhMubj)}(h(struct work_struct *work)h]j)}(hstruct work_struct *workh](j&)}(hj)h]hstruct}(hjhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubh)}(hhh]jO)}(h work_structh]h work_struct}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj ubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjmodnameN classnameNjojr)}ju]jx)}jkj׵sb c.flush_workasbuh1hhjubj8)}(h h]h }(hj/hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubj)}(hjah]h*}(hj=hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubjO)}(hworkh]hwork}(hjJhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1jhjhhhjµhMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjhhhjµhMubah}(h]jah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjµhMhjhhubj{)}(hhh]h)}(h>wait for a work to finish executing the last queueing instanceh]h>wait for a work to finish executing the last queueing instance}(hjthhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjqhhubah}(h]h ]h"]h$]h&]uh1jzhjhhhjµhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jhhhjhNhNubj)}(hXL**Parameters** ``struct work_struct *work`` the work to flush **Description** Wait until **work** has finished execution. **work** is guaranteed to be idle on return if it hasn't been requeued since flush started. **Return** ``true`` if flush_work() waited for the work to finish execution, ``false`` if it was already idle.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubjS)}(hhh]jX)}(h/``struct work_struct *work`` the work to flush h](j^)}(h``struct work_struct *work``h]j)}(hjh]hstruct work_struct *work}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(hthe work to flushh]hthe work to flush}(hjζhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjʶhMhj˶ubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjʶhMhjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubh)}(hWait until **work** has finished execution. **work** is guaranteed to be idle on return if it hasn't been requeued since flush started.h](h Wait until }(hjhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh has finished execution. 
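A typical use of drain_workqueue() is tearing down a self-requeueing (chain-queueing) work item; a hypothetical sketch::

    #include <linux/workqueue.h>

    static struct workqueue_struct *poll_wq;    /* hypothetical */
    static struct work_struct poll_work;        /* INIT_WORK()'d elsewhere */
    static bool poll_stopping;

    static void poll_fn(struct work_struct *work)
    {
            /* ... poll the hardware ... */
            if (!READ_ONCE(poll_stopping))
                    queue_work(poll_wq, &poll_work);    /* chain queueing */
    }

    static void poll_stop(void)
    {
            WRITE_ONCE(poll_stopping, true);
            /*
             * Only chain queueing is allowed while draining; returns once
             * poll_wq is truly empty.
             */
            drain_workqueue(poll_wq);
    }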
}(hjhhhNhNubj)}(h**work**h]hwork}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhU is guaranteed to be idle on return if it hasn’t been requeued since flush started.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubh)}(h **Return**h]j)}(hj;h]hReturn}(hj=hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj9ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubh)}(hc``true`` if flush_work() waited for the work to finish execution, ``false`` if it was already idle.h](j)}(h``true``h]htrue}(hjUhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjQubh: if flush_work() waited for the work to finish execution, }(hjQhhhNhNubj)}(h ``false``h]hfalse}(hjghhhNhNubah}(h]h ]h"]h$]h&]uh1jhjQubh if it was already idle.}(hjQhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jflush_delayed_work (C function)c.flush_delayed_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(h4bool flush_delayed_work (struct delayed_work *dwork)h]j )}(h3bool flush_delayed_work(struct delayed_work *dwork)h](j)}(hj7&h]hbool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjhhhjhMubjI)}(hflush_delayed_workh]jO)}(hflush_delayed_workh]hflush_delayed_work}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjhhhjhMubj)}(h(struct delayed_work *dwork)h]j)}(hstruct delayed_work *dworkh](j&)}(hj)h]hstruct}(hjܷhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjطubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjطubh)}(hhh]jO)}(h delayed_workh]h delayed_work}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjmodnameN classnameNjojr)}ju]jx)}jkj·sbc.flush_delayed_workasbuh1hhjطubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjطubj)}(hjah]h*}(hj(hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjطubjO)}(hdworkh]hdwork}(hj5hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjطubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjԷubah}(h]h ]h"]h$]h&]jjuh1jhjhhhjhMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjhhhjhMubah}(h]jah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjhMhjhhubj{)}(hhh]h)}(h6wait for a dwork to finish executing the last queueingh]h6wait for a dwork to finish executing the last queueing}(hj_hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj\hhubah}(h]h ]h"]h$]h&]uh1jzhjhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjwjjwjjjuh1jhhhjhNhNubj)}(hXz**Parameters** ``struct delayed_work *dwork`` the delayed work to flush **Description** Delayed timer is cancelled and the pending work is queued for immediate execution. Like flush_work(), this function only considers the last queueing instance of **dwork**. 
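For example (hypothetical names; refresh_work is assumed to have been INIT_WORK()'d during setup)::

    #include <linux/printk.h>
    #include <linux/workqueue.h>

    static struct work_struct refresh_work;     /* INIT_WORK()'d elsewhere */

    static void wait_for_refresh(void)
    {
            schedule_work(&refresh_work);
            /* Wait for the last queueing instance to finish executing. */
            if (flush_work(&refresh_work))
                    pr_debug("refresh_work had to be waited for\n");
    }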
**Return** ``true`` if flush_work() waited for the work to finish execution, ``false`` if it was already idle.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj{ubjS)}(hhh]jX)}(h9``struct delayed_work *dwork`` the delayed work to flush h](j^)}(h``struct delayed_work *dwork``h]j)}(hjh]hstruct delayed_work *dwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(hthe delayed work to flushh]hthe delayed work to flush}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMhjubah}(h]h ]h"]h$]h&]uh1jRhj{ubh)}(h**Description**h]j)}(hj۸h]h Description}(hjݸhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjٸubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj{ubh)}(hDelayed timer is cancelled and the pending work is queued for immediate execution. Like flush_work(), this function only considers the last queueing instance of **dwork**.h](hDelayed timer is cancelled and the pending work is queued for immediate execution. Like flush_work(), this function only considers the last queueing instance of }(hjhhhNhNubj)}(h **dwork**h]hdwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj{ubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj{ubh)}(hc``true`` if flush_work() waited for the work to finish execution, ``false`` if it was already idle.h](j)}(h``true``h]htrue}(hj.hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj*ubh: if flush_work() waited for the work to finish execution, }(hj*hhhNhNubj)}(h ``false``h]hfalse}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj*ubh if it was already idle.}(hj*hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj{ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jflush_rcu_work (C function)c.flush_rcu_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(h,bool flush_rcu_work (struct rcu_work *rwork)h]j )}(h+bool flush_rcu_work(struct rcu_work *rwork)h](j)}(hj7&h]hbool}(hjyhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjuhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjuhhhjhMubjI)}(hflush_rcu_workh]jO)}(hflush_rcu_workh]hflush_rcu_work}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjuhhhjhMubj)}(h(struct rcu_work *rwork)h]j)}(hstruct rcu_work *rworkh](j&)}(hj)h]hstruct}(hjhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjubj8)}(h h]h }(hj¹hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubh)}(hhh]jO)}(hrcu_workh]hrcu_work}(hjӹhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjйubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjչmodnameN classnameNjojr)}ju]jx)}jkjsbc.flush_rcu_workasbuh1hhjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubjO)}(hrworkh]hrwork}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1jhjuhhhjhMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjqhhhjhMubah}(h]jlah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjhMhjnhhubj{)}(hhh]h)}(h6wait for a rwork to finish executing the 
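A hypothetical sketch: force a deferred save to happen now and wait for it (save_dwork is assumed to have been INIT_DELAYED_WORK()'d elsewhere)::

    #include <linux/workqueue.h>

    static struct delayed_work save_dwork;      /* INIT_DELAYED_WORK()'d elsewhere */

    static void save_now(void)
    {
            /*
             * If the timer is still pending it is cancelled and the work is
             * queued for immediate execution; then wait for it to finish.
             */
            flush_delayed_work(&save_dwork);
    }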
last queueingh]h6wait for a rwork to finish executing the last queueing}(hj8hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj5hhubah}(h]h ]h"]h$]h&]uh1jzhjnhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjPjjPjjjuh1jhhhjhNhNubj)}(h**Parameters** ``struct rcu_work *rwork`` the rcu work to flush **Return** ``true`` if flush_rcu_work() waited for the work to finish execution, ``false`` if it was already idle.h](h)}(h**Parameters**h]j)}(hjZh]h Parameters}(hj\hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjXubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjTubjS)}(hhh]jX)}(h1``struct rcu_work *rwork`` the rcu work to flush h](j^)}(h``struct rcu_work *rwork``h]j)}(hjyh]hstruct rcu_work *rwork}(hj{hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjwubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjsubjw)}(hhh]h)}(hthe rcu work to flushh]hthe rcu work to flush}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjsubeh}(h]h ]h"]h$]h&]uh1jWhjhMhjpubah}(h]h ]h"]h$]h&]uh1jRhjTubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjTubh)}(hg``true`` if flush_rcu_work() waited for the work to finish execution, ``false`` if it was already idle.h](j)}(h``true``h]htrue}(hjκhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjʺubh> if flush_rcu_work() waited for the work to finish execution, }(hjʺhhhNhNubj)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjʺubh if it was already idle.}(hjʺhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjTubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jcancel_work_sync (C function)c.cancel_work_synchNtauh1jhjhhhNhNubj)}(hhh](j)}(h0bool cancel_work_sync (struct work_struct *work)h]j )}(h/bool cancel_work_sync(struct work_struct *work)h](j)}(hj7&h]hbool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM/ubj8)}(h h]h }(hj'hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjhhhj&hM/ubjI)}(hcancel_work_synch]jO)}(hcancel_work_synch]hcancel_work_sync}(hj9hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj5ubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjhhhj&hM/ubj)}(h(struct work_struct *work)h]j)}(hstruct work_struct *workh](j&)}(hj)h]hstruct}(hjUhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjQubj8)}(h h]h }(hjbhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjQubh)}(hhh]jO)}(h work_structh]h work_struct}(hjshhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjpubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjumodnameN classnameNjojr)}ju]jx)}jkj;sbc.cancel_work_syncasbuh1hhjQubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjQubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjQubjO)}(hworkh]hwork}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjQubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjMubah}(h]h ]h"]h$]h&]jjuh1jhjhhhj&hM/ubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjhhhj&hM/ubah}(h]j ah ](jrjseh"]h$]h&]jwjx)jyhuh1jhj&hM/hjhhubj{)}(hhh]h)}(h'cancel a work and wait for it to finishh]h'cancel a work and wait for it to finish}(hjػhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjջhhubah}(h]h ]h"]h$]h&]uh1jzhjhhhj&hM/ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jhhhjhNhNubj)}(hX**Parameters** ``struct work_struct *work`` the work to cancel 
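A hypothetical sketch combining queue_rcu_work() and flush_rcu_work(): the handler runs only after an RCU grace period, and the flush waits for that execution::

    #include <linux/workqueue.h>

    static struct rcu_work free_rwork;          /* INIT_RCU_WORK()'d elsewhere */

    static void deferred_free_and_wait(void)
    {
            queue_rcu_work(system_wq, &free_rwork);
            /* Waits for the grace period and for the handler to complete. */
            flush_rcu_work(&free_rwork);
    }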
**Description** Cancel **work** and wait for its execution to finish. This function can be used even if the work re-queues itself or migrates to another workqueue. On return from this function, **work** is guaranteed to be not pending or executing on any CPU as long as there aren't racing enqueues. cancel_work_sync(:c:type:`delayed_work->work `) must not be used for delayed_work's. Use cancel_delayed_work_sync() instead. Must be called from a sleepable context if **work** was last queued on a non-BH workqueue. Can also be called from non-hardirq atomic contexts including BH if **work** was last queued on a BH workqueue. Returns ``true`` if **work** was pending, ``false`` otherwise.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM"hjubjS)}(hhh]jX)}(h0``struct work_struct *work`` the work to cancel h](j^)}(h``struct work_struct *work``h]j)}(hjh]hstruct work_struct *work}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(hthe work to cancelh]hthe work to cancel}(hj2hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj.hMhj/ubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhj.hMhjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjTh]h Description}(hjVhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjRubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM!hjubh)}(hXCancel **work** and wait for its execution to finish. This function can be used even if the work re-queues itself or migrates to another workqueue. On return from this function, **work** is guaranteed to be not pending or executing on any CPU as long as there aren't racing enqueues.h](hCancel }(hjjhhhNhNubj)}(h**work**h]hwork}(hjrhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjjubh and wait for its execution to finish. This function can be used even if the work re-queues itself or migrates to another workqueue. On return from this function, }(hjjhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjjubhc is guaranteed to be not pending or executing on any CPU as long as there aren’t racing enqueues.}(hjjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM!hjubh)}(hcancel_work_sync(:c:type:`delayed_work->work `) must not be used for delayed_work's. Use cancel_delayed_work_sync() instead.h](hcancel_work_sync(}(hjhhhNhNubh)}(h+:c:type:`delayed_work->work `h]j)}(hjh]hdelayed_work->work}(hjhhhNhNubah}(h]h ](xrefjc-typeeh"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]refdoccore-api/workqueue refdomainjreftypetype refexplicitrefwarnjojr)}ju]sb reftarget delayed_workuh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM&hjubhP) must not be used for delayed_work’s. Use cancel_delayed_work_sync() instead.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhjʼhM&hjubh)}(hMust be called from a sleepable context if **work** was last queued on a non-BH workqueue. Can also be called from non-hardirq atomic contexts including BH if **work** was last queued on a BH workqueue.h](h+Must be called from a sleepable context if }(hjռhhhNhNubj)}(h**work**h]hwork}(hjݼhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjռubhl was last queued on a non-BH workqueue. 
Can also be called from non-hardirq atomic contexts including BH if }(hjռhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjռubh# was last queued on a BH workqueue.}(hjռhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM)hjubh)}(h>Returns ``true`` if **work** was pending, ``false`` otherwise.h](hReturns }(hjhhhNhNubj)}(h``true``h]htrue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if }(hjhhhNhNubj)}(h**work**h]hwork}(hj"hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh was pending, }(hjhhhNhNubj)}(h ``false``h]hfalse}(hj4hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh otherwise.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM-hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j cancel_delayed_work (C function)c.cancel_delayed_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(h5bool cancel_delayed_work (struct delayed_work *dwork)h]j )}(h4bool cancel_delayed_work(struct delayed_work *dwork)h](j)}(hj7&h]hbool}(hjmhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjihhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMEubj8)}(h h]h }(hj{hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjihhhjzhMEubjI)}(hcancel_delayed_workh]jO)}(hcancel_delayed_workh]hcancel_delayed_work}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjihhhjzhMEubj)}(h(struct delayed_work *dwork)h]j)}(hstruct delayed_work *dworkh](j&)}(hj)h]hstruct}(hjhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubh)}(hhh]jO)}(h delayed_workh]h delayed_work}(hjǽhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjĽubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjɽmodnameN classnameNjojr)}ju]jx)}jkjsbc.cancel_delayed_workasbuh1hhjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubjO)}(hdworkh]hdwork}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1jhjihhhjzhMEubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjehhhjzhMEubah}(h]j`ah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjzhMEhjbhhubj{)}(hhh]h)}(hcancel a delayed workh]hcancel a delayed work}(hj,hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM6hj)hhubah}(h]h ]h"]h$]h&]uh1jzhjbhhhjzhMEubeh}(h]h ](jfunctioneh"]h$]h&]jjjjDjjDjjjuh1jhhhjhNhNubj)}(hX**Parameters** ``struct delayed_work *dwork`` delayed_work to cancel **Description** Kill off a pending delayed_work. This function is safe to call from any context including IRQ handler. **Return** ``true`` if **dwork** was pending and canceled; ``false`` if it wasn't pending. **Note** The work callback function may still be running on return, unless it returns ``true`` and the work doesn't re-arm itself. 
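A typical teardown sketch (the struct and function names are hypothetical): once cancel_work_sync() returns, the resources used by the handler can be released safely::

    #include <linux/workqueue.h>

    struct my_ctx {                             /* hypothetical */
            struct work_struct irq_work;        /* INIT_WORK()'d at init time */
            /* ... */
    };

    static void my_ctx_teardown(struct my_ctx *ctx)
    {
            /* irq_work is neither pending nor running anywhere after this. */
            cancel_work_sync(&ctx->irq_work);
    }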
Explicitly flush or use cancel_delayed_work_sync() to wait on it.h](h)}(h**Parameters**h]j)}(hjNh]h Parameters}(hjPhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjLubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM:hjHubjS)}(hhh]jX)}(h6``struct delayed_work *dwork`` delayed_work to cancel h](j^)}(h``struct delayed_work *dwork``h]j)}(hjmh]hstruct delayed_work *dwork}(hjohhhNhNubah}(h]h ]h"]h$]h&]uh1jhjkubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM7hjgubjw)}(hhh]h)}(hdelayed_work to cancelh]hdelayed_work to cancel}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM7hjubah}(h]h ]h"]h$]h&]uh1jvhjgubeh}(h]h ]h"]h$]h&]uh1jWhjhM7hjdubah}(h]h ]h"]h$]h&]uh1jRhjHubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM9hjHubh)}(h Kill off a pending delayed_work.h]h Kill off a pending delayed_work.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM9hjHubh)}(hEThis function is safe to call from any context including IRQ handler.h]hEThis function is safe to call from any context including IRQ handler.}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM;hjHubh)}(h **Return**h]j)}(hj޾h]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjܾubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM=hjHubh)}(hO``true`` if **dwork** was pending and canceled; ``false`` if it wasn't pending.h](j)}(h``true``h]htrue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if }(hjhhhNhNubj)}(h **dwork**h]hdwork}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh was pending and canceled; }(hjhhhNhNubj)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if it wasn’t pending.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM;hjHubh)}(h**Note**h]j)}(hj7h]hNote}(hj9hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj5ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM>hjHubh)}(hThe work callback function may still be running on return, unless it returns ``true`` and the work doesn't re-arm itself. Explicitly flush or use cancel_delayed_work_sync() to wait on it.h](hMThe work callback function may still be running on return, unless it returns }(hjMhhhNhNubj)}(h``true``h]htrue}(hjUhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjMubhi and the work doesn’t re-arm itself. 
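Because it may be called from any context, cancel_delayed_work() fits interrupt paths; a hypothetical sketch (blink_dwork is assumed to have been INIT_DELAYED_WORK()'d elsewhere)::

    #include <linux/interrupt.h>
    #include <linux/workqueue.h>

    static struct delayed_work blink_dwork;     /* INIT_DELAYED_WORK()'d elsewhere */

    static irqreturn_t example_irq(int irq, void *data)
    {
            /*
             * Safe in hard IRQ context.  The callback may still be running
             * on another CPU; paths that need it fully drained must use
             * cancel_delayed_work_sync() from a sleepable context instead.
             */
            cancel_delayed_work(&blink_dwork);
            return IRQ_HANDLED;
    }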
Explicitly flush or use cancel_delayed_work_sync() to wait on it.}(hjMhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM>hjHubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j%cancel_delayed_work_sync (C function)c.cancel_delayed_work_synchNtauh1jhjhhhNhNubj)}(hhh](j)}(h:bool cancel_delayed_work_sync (struct delayed_work *dwork)h]j )}(h9bool cancel_delayed_work_sync(struct delayed_work *dwork)h](j)}(hj7&h]hbool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMTubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjhhhjhMTubjI)}(hcancel_delayed_work_synch]jO)}(hcancel_delayed_work_synch]hcancel_delayed_work_sync}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjhhhjhMTubj)}(h(struct delayed_work *dwork)h]j)}(hstruct delayed_work *dworkh](j&)}(hj)h]hstruct}(hjʿhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjƿubj8)}(h h]h }(hj׿hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjƿubh)}(hhh]jO)}(h delayed_workh]h delayed_work}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjmodnameN classnameNjojr)}ju]jx)}jkjsbc.cancel_delayed_work_syncasbuh1hhjƿubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjƿubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjƿubjO)}(hdworkh]hdwork}(hj#hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjƿubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj¿ubah}(h]h ]h"]h$]h&]jjuh1jhjhhhjhMTubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjhhhjhMTubah}(h]jah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjhMThjhhubj{)}(hhh]h)}(h/cancel a delayed work and wait for it to finishh]h/cancel a delayed work and wait for it to finish}(hjMhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMLhjJhhubah}(h]h ]h"]h$]h&]uh1jzhjhhhjhMTubeh}(h]h ](jfunctioneh"]h$]h&]jjjjejjejjjuh1jhhhjhNhNubj)}(h**Parameters** ``struct delayed_work *dwork`` the delayed work cancel **Description** This is cancel_work_sync() for delayed works. 
**Return** ``true`` if **dwork** was pending, ``false`` otherwise.h](h)}(h**Parameters**h]j)}(hjoh]h Parameters}(hjqhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjmubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMPhjiubjS)}(hhh]jX)}(h7``struct delayed_work *dwork`` the delayed work cancel h](j^)}(h``struct delayed_work *dwork``h]j)}(hjh]hstruct delayed_work *dwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMMhjubjw)}(hhh]h)}(hthe delayed work cancelh]hthe delayed work cancel}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMMhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMMhjubah}(h]h ]h"]h$]h&]uh1jRhjiubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMOhjiubh)}(h-This is cancel_work_sync() for delayed works.h]h-This is cancel_work_sync() for delayed works.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMOhjiubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMQhjiubh)}(h7``true`` if **dwork** was pending, ``false`` otherwise.h](j)}(h``true``h]htrue}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if }(hjhhhNhNubj)}(h **dwork**h]hdwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh was pending, }(hjhhhNhNubj)}(h ``false``h]hfalse}(hj.hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh otherwise.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMQhjiubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jdisable_work (C function)c.disable_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(h,bool disable_work (struct work_struct *work)h]j )}(h+bool disable_work(struct work_struct *work)h](j)}(hj7&h]hbool}(hjghhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjchhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMfubj8)}(h h]h }(hjuhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjchhhjthMfubjI)}(h disable_workh]jO)}(h disable_workh]h disable_work}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjchhhjthMfubj)}(h(struct work_struct *work)h]j)}(hstruct work_struct *workh](j&)}(hj)h]hstruct}(hjhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubh)}(hhh]jO)}(h work_structh]h work_struct}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjmodnameN classnameNjojr)}ju]jx)}jkjsbc.disable_workasbuh1hhjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubjO)}(hworkh]hwork}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1jhjchhhjthMfubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhj_hhhjthMfubah}(h]jZah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjthMfhj\hhubj{)}(hhh]h)}(hDisable and cancel a work itemh]hDisable and cancel a work item}(hj&hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM[hj#hhubah}(h]h ]h"]h$]h&]uh1jzhj\hhhjthMfubeh}(h]h ](jfunctioneh"]h$]h&]jjjj>jj>jjjuh1jhhhjhNhNubj)}(hX**Parameters** ``struct work_struct *work`` work item to disable **Description** Disable **work** by incrementing its disable 
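For instance in driver or module teardown (hypothetical names)::

    #include <linux/workqueue.h>

    static struct delayed_work poll_dwork;      /* INIT_DELAYED_WORK()'d elsewhere */

    static void example_teardown(void)
    {
            /*
             * Timer stopped, pending work cancelled, and any running
             * callback has finished before this returns.
             */
            cancel_delayed_work_sync(&poll_dwork);
    }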
count and cancel it if currently pending. As long as the disable count is non-zero, any attempt to queue **work** will fail and return ``false``. The maximum supported disable depth is 2 to the power of ``WORK_OFFQ_DISABLE_BITS``, currently 65536. Can be called from any context. Returns ``true`` if **work** was pending, ``false`` otherwise.h](h)}(h**Parameters**h]j)}(hjHh]h Parameters}(hjJhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjFubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM_hjBubjS)}(hhh]jX)}(h2``struct work_struct *work`` work item to disable h](j^)}(h``struct work_struct *work``h]j)}(hjgh]hstruct work_struct *work}(hjihhhNhNubah}(h]h ]h"]h$]h&]uh1jhjeubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM\hjaubjw)}(hhh]h)}(hwork item to disableh]hwork item to disable}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj|hM\hj}ubah}(h]h ]h"]h$]h&]uh1jvhjaubeh}(h]h ]h"]h$]h&]uh1jWhj|hM\hj^ubah}(h]h ]h"]h$]h&]uh1jRhjBubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM^hjBubh)}(hX$Disable **work** by incrementing its disable count and cancel it if currently pending. As long as the disable count is non-zero, any attempt to queue **work** will fail and return ``false``. The maximum supported disable depth is 2 to the power of ``WORK_OFFQ_DISABLE_BITS``, currently 65536.h](hDisable }(hjhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh by incrementing its disable count and cancel it if currently pending. As long as the disable count is non-zero, any attempt to queue ?}(hjhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh will fail and return }(hjhhhNhNubj)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh;. The maximum supported disable depth is 2 to the power of }(hjhhhNhNubj)}(h``WORK_OFFQ_DISABLE_BITS``h]hWORK_OFFQ_DISABLE_BITS}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh, currently 65536.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM^hjBubh)}(h^Can be called from any context. Returns ``true`` if **work** was pending, ``false`` otherwise.h](h(Can be called from any context. 
Returns }(hjhhhNhNubj)}(h``true``h]htrue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if }(hjhhhNhNubj)}(h**work**h]hwork}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh was pending, }(hjhhhNhNubj)}(h ``false``h]hfalse}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh otherwise.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMchjBubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jdisable_work_sync (C function)c.disable_work_synchNtauh1jhjhhhNhNubj)}(hhh](j)}(h1bool disable_work_sync (struct work_struct *work)h]j )}(h0bool disable_work_sync(struct work_struct *work)h](j)}(hj7&h]hbool}(hjthhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjphhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMyubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjphhhjhMyubjI)}(hdisable_work_synch]jO)}(hdisable_work_synch]hdisable_work_sync}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjphhhjhMyubj)}(h(struct work_struct *work)h]j)}(hstruct work_struct *workh](j&)}(hj)h]hstruct}(hjhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubh)}(hhh]jO)}(h work_structh]h work_struct}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjmodnameN classnameNjojr)}ju]jx)}jkjsbc.disable_work_syncasbuh1hhjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubjO)}(hworkh]hwork}(hj hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1jhjphhhjhMyubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjlhhhjhMyubah}(h]jgah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjhMyhjihhubj{)}(hhh]h)}(h%Disable, cancel and drain a work itemh]h%Disable, cancel and drain a work item}(hj3hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMmhj0hhubah}(h]h ]h"]h$]h&]uh1jzhjihhhjhMyubeh}(h]h ](jfunctioneh"]h$]h&]jjjjKjjKjjjuh1jhhhjhNhNubj)}(hX**Parameters** ``struct work_struct *work`` work item to disable **Description** Similar to disable_work() but also wait for **work** to finish if currently executing. Must be called from a sleepable context if **work** was last queued on a non-BH workqueue. Can also be called from non-hardirq atomic contexts including BH if **work** was last queued on a BH workqueue. 
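A hypothetical sketch: shut an event source off so that later queueing attempts fail until a matching enable_work()::

    #include <linux/workqueue.h>

    static struct work_struct event_work;       /* INIT_WORK()'d elsewhere */

    static void events_off(void)
    {
            /*
             * Cancels a pending event_work and bumps its disable count;
             * queue_work()/schedule_work() on it now return false until a
             * matching enable_work() brings the count back to zero.
             */
            disable_work(&event_work);
    }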
Returns ``true`` if **work** was pending, ``false`` otherwise.h](h)}(h**Parameters**h]j)}(hjUh]h Parameters}(hjWhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjSubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMqhjOubjS)}(hhh]jX)}(h2``struct work_struct *work`` work item to disable h](j^)}(h``struct work_struct *work``h]j)}(hjth]hstruct work_struct *work}(hjvhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjrubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMnhjnubjw)}(hhh]h)}(hwork item to disableh]hwork item to disable}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMnhjubah}(h]h ]h"]h$]h&]uh1jvhjnubeh}(h]h ]h"]h$]h&]uh1jWhjhMnhjkubah}(h]h ]h"]h$]h&]uh1jRhjOubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMphjOubh)}(hVSimilar to disable_work() but also wait for **work** to finish if currently executing.h](h,Similar to disable_work() but also wait for }(hjhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh" to finish if currently executing.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMphjOubh)}(hMust be called from a sleepable context if **work** was last queued on a non-BH workqueue. Can also be called from non-hardirq atomic contexts including BH if **work** was last queued on a BH workqueue.h](h+Must be called from a sleepable context if }(hjhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhl was last queued on a non-BH workqueue. Can also be called from non-hardirq atomic contexts including BH if }(hjhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh# was last queued on a BH workqueue.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMshjOubh)}(h>Returns ``true`` if **work** was pending, ``false`` otherwise.h](hReturns }(hjhhhNhNubj)}(h``true``h]htrue}(hj!hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if }(hjhhhNhNubj)}(h**work**h]hwork}(hj3hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh was pending, }(hjhhhNhNubj)}(h ``false``h]hfalse}(hjEhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh otherwise.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMwhjOubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jenable_work (C function) c.enable_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(h+bool enable_work (struct work_struct *work)h]j )}(h*bool enable_work(struct work_struct *work)h](j)}(hj7&h]hbool}(hj~hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjzhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjzhhhjhMubjI)}(h enable_workh]jO)}(h enable_workh]h enable_work}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjzhhhjhMubj)}(h(struct work_struct *work)h]j)}(hstruct work_struct *workh](j&)}(hj)h]hstruct}(hjhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubh)}(hhh]jO)}(h work_structh]h work_struct}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjmodnameN classnameNjojr)}ju]jx)}jkjsb c.enable_workasbuh1hhjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubj)}(hjah]h*}(hjhhhNhNubah}(h]h 
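A hypothetical quiesce helper; it must run in sleepable context when rx_work was last queued on a non-BH workqueue::

    #include <linux/workqueue.h>

    static struct work_struct rx_work;          /* INIT_WORK()'d elsewhere */

    static void rx_quiesce(void)
    {
            /*
             * Cancels a pending rx_work, waits for a running instance to
             * finish, and leaves the item disabled so nothing re-queues it.
             */
            disable_work_sync(&rx_work);
    }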
]jah"]h$]h&]uh1jhjubjO)}(hworkh]hwork}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1jhjzhhhjhMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjvhhhjhMubah}(h]jqah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjhMhjshhubj{)}(hhh]h)}(hEnable a work itemh]hEnable a work item}(hj=hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj:hhubah}(h]h ]h"]h$]h&]uh1jzhjshhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjUjjUjjjuh1jhhhjhNhNubj)}(hX8**Parameters** ``struct work_struct *work`` work item to enable **Description** Undo disable_work[_sync]() by decrementing **work**'s disable count. **work** can only be queued if its disable count is 0. Can be called from any context. Returns ``true`` if the disable count reached 0. Otherwise, ``false``.h](h)}(h**Parameters**h]j)}(hj_h]h Parameters}(hjahhhNhNubah}(h]h ]h"]h$]h&]uh1jhj]ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjYubjS)}(hhh]jX)}(h1``struct work_struct *work`` work item to enable h](j^)}(h``struct work_struct *work``h]j)}(hj~h]hstruct work_struct *work}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj|ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjxubjw)}(hhh]h)}(hwork item to enableh]hwork item to enable}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjxubeh}(h]h ]h"]h$]h&]uh1jWhjhMhjuubah}(h]h ]h"]h$]h&]uh1jRhjYubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjYubh)}(h{Undo disable_work[_sync]() by decrementing **work**'s disable count. **work** can only be queued if its disable count is 0.h](h+Undo disable_work[_sync]() by decrementing }(hjhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh’s disable count. }(hjhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh. can only be queued if its disable count is 0.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjYubh)}(hfCan be called from any context. Returns ``true`` if the disable count reached 0. Otherwise, ``false``.h](h(Can be called from any context. Returns }(hjhhhNhNubj)}(h``true``h]htrue}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh, if the disable count reached 0. 
Otherwise, }(hjhhhNhNubj)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjYubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j!disable_delayed_work (C function)c.disable_delayed_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(h6bool disable_delayed_work (struct delayed_work *dwork)h]j )}(h5bool disable_delayed_work(struct delayed_work *dwork)h](j)}(hj7&h]hbool}(hjUhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjQhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hjchhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjQhhhjbhMubjI)}(hdisable_delayed_workh]jO)}(hdisable_delayed_workh]hdisable_delayed_work}(hjuhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjqubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjQhhhjbhMubj)}(h(struct delayed_work *dwork)h]j)}(hstruct delayed_work *dworkh](j&)}(hj)h]hstruct}(hjhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubh)}(hhh]jO)}(h delayed_workh]h delayed_work}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjmodnameN classnameNjojr)}ju]jx)}jkjwsbc.disable_delayed_workasbuh1hhjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubjO)}(hdworkh]hdwork}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1jhjQhhhjbhMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjMhhhjbhMubah}(h]jHah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjbhMhjJhhubj{)}(hhh]h)}(h&Disable and cancel a delayed work itemh]h&Disable and cancel a delayed work item}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jzhjJhhhjbhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjj,jj,jjjuh1jhhhjhNhNubj)}(h**Parameters** ``struct delayed_work *dwork`` delayed work item to disable **Description** disable_work() for delayed work items.h](h)}(h**Parameters**h]j)}(hj6h]h Parameters}(hj8hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj4ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj0ubjS)}(hhh]jX)}(h<``struct delayed_work *dwork`` delayed work item to disable h](j^)}(h``struct delayed_work *dwork``h]j)}(hjUh]hstruct delayed_work *dwork}(hjWhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjSubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjOubjw)}(hhh]h)}(hdelayed work item to disableh]hdelayed work item to disable}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjjhMhjkubah}(h]h ]h"]h$]h&]uh1jvhjOubeh}(h]h ]h"]h$]h&]uh1jWhjjhMhjLubah}(h]h ]h"]h$]h&]uh1jRhj0ubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj0ubh)}(h&disable_work() for delayed work items.h]h&disable_work() for delayed work items.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj0ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j&disable_delayed_work_sync (C function)c.disable_delayed_work_synchNtauh1jhjhhhNhNubj)}(hhh](j)}(h;bool disable_delayed_work_sync (struct delayed_work *dwork)h]j )}(h:bool disable_delayed_work_sync(struct delayed_work *dwork)h](j)}(hj7&h]hbool}(hjhhhNhNubah}(h]h 
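   As a rough illustration of the disable/enable pair, the sketch below uses
   a hypothetical driver (``foo_dev``, ``foo_refresh_work`` and
   ``foo_reconfigure()`` are made-up names) that quiesces its work item
   around a reconfiguration::

      struct foo_dev {
              struct work_struct refresh_work;  /* INIT_WORK()'d at probe time */
      };

      static void foo_reconfigure(struct foo_dev *foo)
      {
              /*
               * Bump the disable count and wait for a running callback to
               * finish.  While disabled, queueing the item is a no-op.
               */
              disable_work_sync(&foo->refresh_work);

              /* ... modify state the callback depends on ... */

              /* Drop the disable count; at 0 the item may be queued again. */
              enable_work(&foo->refresh_work);
              queue_work(system_wq, &foo->refresh_work);
      }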
]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjhhhjhMubjI)}(hdisable_delayed_work_synch]jO)}(hdisable_delayed_work_synch]hdisable_delayed_work_sync}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjhhhjhMubj)}(h(struct delayed_work *dwork)h]j)}(hstruct delayed_work *dworkh](j&)}(hj)h]hstruct}(hjhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hj ubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj ubh)}(hhh]jO)}(h delayed_workh]h delayed_work}(hj/hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj,ubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetj1modnameN classnameNjojr)}ju]jx)}jkjsbc.disable_delayed_work_syncasbuh1hhj ubj8)}(h h]h }(hjOhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj ubj)}(hjah]h*}(hj]hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj ubjO)}(hdworkh]hdwork}(hjjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj ubah}(h]h ]h"]h$]h&]jjuh1jhjhhhjhMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjhhhjhMubah}(h]jah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjhMhjhhubj{)}(hhh]h)}(h-Disable, cancel and drain a delayed work itemh]h-Disable, cancel and drain a delayed work item}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jzhjhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jhhhjhNhNubj)}(h**Parameters** ``struct delayed_work *dwork`` delayed work item to disable **Description** disable_work_sync() for delayed work items.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubjS)}(hhh]jX)}(h<``struct delayed_work *dwork`` delayed work item to disable h](j^)}(h``struct delayed_work *dwork``h]j)}(hjh]hstruct delayed_work *dwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(hdelayed work item to disableh]hdelayed work item to disable}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMhjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubh)}(h+disable_work_sync() for delayed work items.h]h+disable_work_sync() for delayed work items.}(hj&hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j enable_delayed_work (C function)c.enable_delayed_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(h5bool enable_delayed_work (struct delayed_work *dwork)h]j )}(h4bool enable_delayed_work(struct delayed_work *dwork)h](j)}(hj7&h]hbool}(hjUhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjQhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hjchhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjQhhhjbhMubjI)}(henable_delayed_workh]jO)}(henable_delayed_workh]henable_delayed_work}(hjuhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjqubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjQhhhjbhMubj)}(h(struct delayed_work *dwork)h]j)}(hstruct delayed_work *dworkh](j&)}(hj)h]hstruct}(hjhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubh)}(hhh]jO)}(h delayed_workh]h 
delayed_work}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjmodnameN classnameNjojr)}ju]jx)}jkjwsbc.enable_delayed_workasbuh1hhjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubjO)}(hdworkh]hdwork}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1jhjQhhhjbhMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjMhhhjbhMubah}(h]jHah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjbhMhjJhhubj{)}(hhh]h)}(hEnable a delayed work itemh]hEnable a delayed work item}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jzhjJhhhjbhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjj,jj,jjjuh1jhhhjhNhNubj)}(h**Parameters** ``struct delayed_work *dwork`` delayed work item to enable **Description** enable_work() for delayed work items.h](h)}(h**Parameters**h]j)}(hj6h]h Parameters}(hj8hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj4ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj0ubjS)}(hhh]jX)}(h;``struct delayed_work *dwork`` delayed work item to enable h](j^)}(h``struct delayed_work *dwork``h]j)}(hjUh]hstruct delayed_work *dwork}(hjWhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjSubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjOubjw)}(hhh]h)}(hdelayed work item to enableh]hdelayed work item to enable}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjjhMhjkubah}(h]h ]h"]h$]h&]uh1jvhjOubeh}(h]h ]h"]h$]h&]uh1jWhjjhMhjLubah}(h]h ]h"]h$]h&]uh1jRhj0ubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj0ubh)}(h%enable_work() for delayed work items.h]h%enable_work() for delayed work items.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj0ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j!schedule_on_each_cpu (C function)c.schedule_on_each_cpuhNtauh1jhjhhhNhNubj)}(hhh](j)}(h+int schedule_on_each_cpu (work_func_t func)h]j )}(h*int schedule_on_each_cpu(work_func_t func)h](j)}(hinth]hint}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjhhhjhMubjI)}(hschedule_on_each_cpuh]jO)}(hschedule_on_each_cpuh]hschedule_on_each_cpu}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjhhhjhMubj)}(h(work_func_t func)h]j)}(hwork_func_t funch](h)}(hhh]jO)}(h work_func_th]h work_func_t}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjmodnameN classnameNjojr)}ju]jx)}jkjsbc.schedule_on_each_cpuasbuh1hhjubj8)}(h h]h }(hj5hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubjO)}(hfunch]hfunc}(hjChhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj ubah}(h]h ]h"]h$]h&]jjuh1jhjhhhjhMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjhhhjhMubah}(h]jah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjhMhjhhubj{)}(hhh]h)}(h3execute a function synchronously on each online CPUh]h3execute a function synchronously on each online CPU}(hjmhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjjhhubah}(h]h ]h"]h$]h&]uh1jzhjhhhjhMubeh}(h]h 
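   The delayed-work variants follow the same pattern. A minimal sketch with
   a hypothetical ``poll_work`` item::

      static struct delayed_work poll_work;     /* INIT_DELAYED_WORK()'d elsewhere */

      static void poll_pause(void)
      {
              /* Cancel any pending queueing and wait for the callback. */
              disable_delayed_work_sync(&poll_work);
      }

      static void poll_resume(void)
      {
              /* Re-arm only once the disable count drops back to 0. */
              if (enable_delayed_work(&poll_work))
                      queue_delayed_work(system_wq, &poll_work, HZ);
      }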
](jfunctioneh"]h$]h&]jjjjjjjjjuh1jhhhjhNhNubj)}(hX!**Parameters** ``work_func_t func`` the function to call **Description** schedule_on_each_cpu() executes **func** on each online CPU using the system workqueue and blocks until all CPUs have completed. schedule_on_each_cpu() is very slow. **Return** 0 on success, -errno on failure.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubjS)}(hhh]jX)}(h*``work_func_t func`` the function to call h](j^)}(h``work_func_t func``h]j)}(hjh]hwork_func_t func}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(hthe function to callh]hthe function to call}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMhjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubh)}(hschedule_on_each_cpu() executes **func** on each online CPU using the system workqueue and blocks until all CPUs have completed. schedule_on_each_cpu() is very slow.h](h schedule_on_each_cpu() executes }(hjhhhNhNubj)}(h**func**h]hfunc}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh} on each online CPU using the system workqueue and blocks until all CPUs have completed. schedule_on_each_cpu() is very slow.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubh)}(h **Return**h]j)}(hj"h]hReturn}(hj$hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubh)}(h 0 on success, -errno on failure.h]h 0 on success, -errno on failure.}(hj8hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j'execute_in_process_context (C function)c.execute_in_process_contexthNtauh1jhjhhhNhNubj)}(hhh](j)}(hHint execute_in_process_context (work_func_t fn, struct execute_work *ew)h]j )}(hGint execute_in_process_context(work_func_t fn, struct execute_work *ew)h](j)}(hinth]hint}(hjghhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjchhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hjvhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjchhhjuhMubjI)}(hexecute_in_process_contexth]jO)}(hexecute_in_process_contexth]hexecute_in_process_context}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjchhhjuhMubj)}(h)(work_func_t fn, struct execute_work *ew)h](j)}(hwork_func_t fnh](h)}(hhh]jO)}(h work_func_th]h work_func_t}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjmodnameN classnameNjojr)}ju]jx)}jkjsbc.execute_in_process_contextasbuh1hhjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubjO)}(hfnh]hfn}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct execute_work *ewh](j&)}(hj)h]hstruct}(hjhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubh)}(hhh]jO)}(h execute_workh]h execute_work}(hj hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj ubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjmodnameN 
.. c:function:: int execute_in_process_context(work_func_t fn, struct execute_work *ew)

   reliably execute the routine with user context

   **Parameters**

   ``work_func_t fn``
       the function to execute

   ``struct execute_work *ew``
       guaranteed storage for the execute work structure (must be available
       when the work executes)

   **Description**

   Executes the function immediately if process context is available,
   otherwise schedules the function for delayed execution.

   **Return**

   0 - function was executed

   1 - function was scheduled for execution
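   A sketch of the usual pattern, assuming a hypothetical ``foo_dev`` whose
   final release may be reached from atomic context::

      struct foo_dev {
              struct execute_work free_ew;
              /* ... */
      };

      static void foo_free(struct work_struct *work)
      {
              struct foo_dev *foo = container_of(work, struct foo_dev,
                                                 free_ew.work);

              kfree(foo);
      }

      static void foo_put_final(struct foo_dev *foo)
      {
              /* Frees immediately in process context, defers otherwise. */
              execute_in_process_context(foo_free, &foo->free_ew);
      }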
]j2ah"]h$]h&]uh1j%hj9hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hjKhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj9hhhjJhMubh)}(hhh]jO)}(hworkqueue_attrsh]hworkqueue_attrs}(hj\hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjYubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetj^modnameN classnameNjojr)}ju]jx)}jkalloc_workqueue_attrssbc.alloc_workqueue_attrsasbuh1hhj9hhhjJhMubj8)}(h h]h }(hj}hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj9hhhjJhMubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj9hhhjJhMubjI)}(halloc_workqueue_attrsh]jO)}(hjzh]halloc_workqueue_attrs}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhj9hhhjJhMubj)}(h(void)h]j)}(hvoidh]j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1jhj9hhhjJhMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhj5hhhjJhMubah}(h]j0ah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjJhMhj2hhubj{)}(hhh]h)}(hallocate a workqueue_attrsh]hallocate a workqueue_attrs}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jzhj2hhhjJhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jhhhjhNhNubj)}(h**Parameters** ``void`` no arguments **Description** Allocate a new workqueue_attrs, initialize with default settings and return it. **Return** The allocated new workqueue_attr on success. ``NULL`` on failure.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubjS)}(hhh]jX)}(h``void`` no arguments h](j^)}(h``void``h]j)}(hj"h]hvoid}(hj$hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(h no argumentsh]h no arguments}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj7hMhj8ubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhj7hMhjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hj]h]h Description}(hj_hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj[ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubh)}(hOAllocate a new workqueue_attrs, initialize with default settings and return it.h]hOAllocate a new workqueue_attrs, initialize with default settings and return it.}(hjshhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubh)}(hAThe allocated new workqueue_attr on success. ``NULL`` on failure.h](h-The allocated new workqueue_attr on success. 
}(hjhhhNhNubj)}(h``NULL``h]hNULL}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh on failure.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jinit_worker_pool (C function)c.init_worker_poolhNtauh1jhjhhhNhNubj)}(hhh](j)}(h/int init_worker_pool (struct worker_pool *pool)h]j )}(h.int init_worker_pool(struct worker_pool *pool)h](j)}(hinth]hint}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjhhhjhMubjI)}(hinit_worker_poolh]jO)}(hinit_worker_poolh]hinit_worker_pool}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjhhhjhMubj)}(h(struct worker_pool *pool)h]j)}(hstruct worker_pool *poolh](j&)}(hj)h]hstruct}(hjhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjubj8)}(h h]h }(hj%hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubh)}(hhh]jO)}(h worker_poolh]h worker_pool}(hj6hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj3ubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetj8modnameN classnameNjojr)}ju]jx)}jkjsbc.init_worker_poolasbuh1hhjubj8)}(h h]h }(hjVhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubj)}(hjah]h*}(hjdhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubjO)}(hpoolh]hpool}(hjqhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1jhjhhhjhMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjhhhjhMubah}(h]jah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjhMhjhhubj{)}(hhh]h)}(h'initialize a newly zalloc'd worker_poolh]h)initialize a newly zalloc’d worker_pool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jzhjhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jhhhjhNhNubj)}(hX]**Parameters** ``struct worker_pool *pool`` worker_pool to initialize **Description** Initialize a newly zalloc'd **pool**. It also allocates **pool->attrs**. **Return** 0 on success, -errno on failure. Even on failure, all fields inside **pool** proper are initialized and put_unbound_pool() can be called on **pool** safely to release it.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubjS)}(hhh]jX)}(h7``struct worker_pool *pool`` worker_pool to initialize h](j^)}(h``struct worker_pool *pool``h]j)}(hjh]hstruct worker_pool *pool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(hworker_pool to initializeh]hworker_pool to initialize}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMhjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubh)}(hIInitialize a newly zalloc'd **pool**. It also allocates **pool->attrs**.h](hInitialize a newly zalloc’d }(hj-hhhNhNubj)}(h**pool**h]hpool}(hj5hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj-ubh. 
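   These helpers are intended for built-in code (they are not, as far as we
   are aware, exported to modules). A minimal lifecycle sketch::

      struct workqueue_attrs *attrs;

      attrs = alloc_workqueue_attrs();
      if (!attrs)
              return -ENOMEM;

      attrs->nice = -10;      /* e.g. run the workers at a higher priority */

      /* ... hand attrs to apply_workqueue_attrs(), see below ... */

      free_workqueue_attrs(attrs);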
.. c:function:: int init_worker_pool(struct worker_pool *pool)

   initialize a newly zalloc'd worker_pool

   **Parameters**

   ``struct worker_pool *pool``
       worker_pool to initialize

   **Description**

   Initialize a newly zalloc'd **pool**. It also allocates **pool->attrs**.

   **Return**

   0 on success, -errno on failure. Even on failure, all fields inside
   **pool** proper are initialized and put_unbound_pool() can be called on
   **pool** safely to release it.

.. c:function:: void put_unbound_pool(struct worker_pool *pool)

   put a worker_pool

   **Parameters**

   ``struct worker_pool *pool``
       worker_pool to put

   **Description**

   Put **pool**. If its refcnt reaches zero, it gets destroyed in an RCU
   safe manner. get_unbound_pool() calls this function on its failure path
   and this function should be able to release pools which went through,
   successfully or not, init_worker_pool().

   Should be called with wq_pool_mutex held.

.. c:function:: struct worker_pool *get_unbound_pool(const struct workqueue_attrs *attrs)

   get a worker_pool with the specified attributes

   **Parameters**

   ``const struct workqueue_attrs *attrs``
       the attributes of the worker_pool to get

   **Description**

   Obtain a worker_pool which has the same attributes as **attrs**, bump the
   reference count and return it. If there already is a matching
   worker_pool, it will be used; otherwise, this function attempts to create
   a new one.

   Should be called with wq_pool_mutex held.

   **Return**

   On success, a worker_pool with the same attributes as **attrs**. On
   failure, ``NULL``.
.. c:function:: void wq_calc_pod_cpumask(struct workqueue_attrs *attrs, int cpu)

   calculate a wq_attrs' cpumask for a pod

   **Parameters**

   ``struct workqueue_attrs *attrs``
       the wq_attrs of the default pwq of the target workqueue

   ``int cpu``
       the target CPU

   **Description**

   Calculate the cpumask a workqueue with **attrs** should use on **pod**.
   The result is stored in **attrs->__pod_cpumask**.

   If pod affinity is not enabled, **attrs->cpumask** is always used. If
   enabled and **pod** has online CPUs requested by **attrs**, the returned
   cpumask is the intersection of the possible CPUs of **pod** and
   **attrs->cpumask**.

   The caller is responsible for ensuring that the cpumask of **pod** stays
   stable.

.. c:function:: int apply_workqueue_attrs(struct workqueue_struct *wq, const struct workqueue_attrs *attrs)

   apply new workqueue_attrs to an unbound workqueue

   **Parameters**

   ``struct workqueue_struct *wq``
       the target workqueue

   ``const struct workqueue_attrs *attrs``
       the workqueue_attrs to apply, allocated with alloc_workqueue_attrs()

   **Description**

   Apply **attrs** to an unbound workqueue **wq**. Unless disabled, this
   function maps a separate pwq to each CPU pod with possible CPUs in
   **attrs->cpumask** so that work items are affine to the pod they were
   issued on. Older pwqs are released as in-flight work items finish. Note
   that a work item which repeatedly requeues itself back-to-back will stay
   on its current pwq.

   Performs GFP_KERNEL allocations.

   **Return**

   0 on success and -errno on failure.
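   A hedged sketch for built-in code (apply_workqueue_attrs() is, to our
   knowledge, not exported to modules), assuming an unbound workqueue whose
   workers should stay on the CPUs of node 0::

      struct workqueue_struct *wq;
      struct workqueue_attrs *attrs;
      int ret;

      wq = alloc_workqueue("foo_unbound", WQ_UNBOUND, 0);
      if (!wq)
              return -ENOMEM;

      attrs = alloc_workqueue_attrs();
      if (!attrs) {
              destroy_workqueue(wq);
              return -ENOMEM;
      }

      cpumask_copy(attrs->cpumask, cpumask_of_node(0));  /* node-0 CPUs only */
      ret = apply_workqueue_attrs(wq, attrs);            /* may sleep */

      free_workqueue_attrs(attrs);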
If a workqueue user wants strict affinity, it's the user's responsibility to flush the work item from CPU_DOWN_PREPARE.h](h)}(h**Parameters**h]j)}(hj'h]h Parameters}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj%ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj!ubjS)}(hhh](jX)}(h5``struct workqueue_struct *wq`` the target workqueue h](j^)}(h``struct workqueue_struct *wq``h]j)}(hjFh]hstruct workqueue_struct *wq}(hjHhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjDubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj@ubjw)}(hhh]h)}(hthe target workqueueh]hthe target workqueue}(hj_hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj[hMhj\ubah}(h]h ]h"]h$]h&]uh1jvhj@ubeh}(h]h ]h"]h$]h&]uh1jWhj[hMhj=ubjX)}(h/``int cpu`` the CPU to update the pwq slot for h](j^)}(h ``int cpu``h]j)}(hjh]hint cpu}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj}ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjyubjw)}(hhh]h)}(h"the CPU to update the pwq slot forh]h"the CPU to update the pwq slot for}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjyubeh}(h]h ]h"]h$]h&]uh1jWhjhMhj=ubeh}(h]h ]h"]h$]h&]uh1jRhj!ubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj!ubh)}(hThis function is to be called from ``CPU_DOWN_PREPARE``, ``CPU_ONLINE`` and ``CPU_DOWN_FAILED``. **cpu** is in the same pod of the CPU being hot[un]plugged.h](h#This function is to be called from }(hjhhhNhNubj)}(h``CPU_DOWN_PREPARE``h]hCPU_DOWN_PREPARE}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh, }(hjhhhNhNubj)}(h``CPU_ONLINE``h]h CPU_ONLINE}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh and }(hjhhhNhNubj)}(h``CPU_DOWN_FAILED``h]hCPU_DOWN_FAILED}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh. }(hjhhhNhNubj)}(h**cpu**h]hcpu}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh4 is in the same pod of the CPU being hot[un]plugged.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj!ubh)}(hIf pod affinity can't be adjusted due to memory allocation failure, it falls back to **wq->dfl_pwq** which may not be optimal but is always correct.h](hWIf pod affinity can’t be adjusted due to memory allocation failure, it falls back to }(hj'hhhNhNubj)}(h**wq->dfl_pwq**h]h wq->dfl_pwq}(hj/hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj'ubh0 which may not be optimal but is always correct.}(hj'hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM#hj!ubh)}(hXNote that when the last allowed CPU of a pod goes offline for a workqueue with a cpumask spanning multiple pods, the workers which were already executing the work items for the workqueue will lose their CPU affinity and may execute on any CPU. This is similar to how per-cpu workqueues behave on CPU_DOWN. If a workqueue user wants strict affinity, it's the user's responsibility to flush the work item from CPU_DOWN_PREPARE.h]hXNote that when the last allowed CPU of a pod goes offline for a workqueue with a cpumask spanning multiple pods, the workers which were already executing the work items for the workqueue will lose their CPU affinity and may execute on any CPU. This is similar to how per-cpu workqueues behave on CPU_DOWN. 
void wq_adjust_max_active(struct workqueue_struct *wq)
    update a wq's max_active to the current setting

    **Parameters**
        ``struct workqueue_struct *wq`` -- target workqueue

    **Description**

    If **wq** isn't freezing, set **wq->max_active** to the saved_max_active
    and activate inactive work items accordingly. If **wq** is freezing,
    clear **wq->max_active** to zero.


void destroy_workqueue(struct workqueue_struct *wq)
    safely terminate a workqueue

    **Parameters**
        ``struct workqueue_struct *wq`` -- target workqueue

    **Description**

    Safely destroy a workqueue. All work currently pending will be done
    first.
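To make the allocate/queue/destroy lifecycle around destroy_workqueue()
concrete, here is a minimal sketch of a driver-style user. The workqueue name
``example_cleanup``, the work function ``do_cleanup()`` and the init/exit
wrappers are hypothetical; only alloc_workqueue(), queue_work() and
destroy_workqueue() are the documented calls::

    #include <linux/workqueue.h>
    #include <linux/errno.h>

    /* Hypothetical work function; handles one deferred cleanup request. */
    static void do_cleanup(struct work_struct *work)
    {
            /* ... process whatever was queued ... */
    }

    static DECLARE_WORK(cleanup_work, do_cleanup);
    static struct workqueue_struct *cleanup_wq;

    static int example_init(void)
    {
            /* WQ_MEM_RECLAIM gives the workqueue a rescuer thread. */
            cleanup_wq = alloc_workqueue("example_cleanup", WQ_MEM_RECLAIM, 1);
            if (!cleanup_wq)
                    return -ENOMEM;

            queue_work(cleanup_wq, &cleanup_work);
            return 0;
    }

    static void example_exit(void)
    {
            /* Work still pending here is executed before the wq is freed. */
            destroy_workqueue(cleanup_wq);
    }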
void workqueue_set_max_active(struct workqueue_struct *wq, int max_active)
    adjust max_active of a workqueue

    **Parameters**
        ``struct workqueue_struct *wq`` -- target workqueue
        ``int max_active`` -- new max_active value

    **Description**

    Set max_active of **wq** to **max_active**. See the alloc_workqueue()
    function comment.

    **Context**

    Don't call from IRQ context.


void workqueue_set_min_active(struct workqueue_struct *wq, int min_active)
    adjust min_active of an unbound workqueue

    **Parameters**
        ``struct workqueue_struct *wq`` -- target unbound workqueue
        ``int min_active`` -- new min_active value

    **Description**

    Set min_active of an unbound workqueue. Unlike other types of workqueues,
    an unbound workqueue is not guaranteed to be able to process max_active
    interdependent work items. Instead, an unbound workqueue is guaranteed to
    be able to process min_active number of interdependent work items, which
    is ``WQ_DFL_MIN_ACTIVE`` by default.

    Use this function to adjust the min_active value between 0 and the
    current max_active.
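A short, hedged sketch of how the two knobs above might be used together on
an unbound workqueue; the workqueue name ``example_io``, the tuning helper and
the values ``new_max`` and 4 are illustrative assumptions, not recommendations::

    #include <linux/workqueue.h>
    #include <linux/errno.h>

    static struct workqueue_struct *io_wq;  /* hypothetical unbound workqueue */

    static int example_tune_concurrency(int new_max)
    {
            /* max_active of 0 requests the default limit. */
            io_wq = alloc_workqueue("example_io", WQ_UNBOUND, 0);
            if (!io_wq)
                    return -ENOMEM;

            /* Raise or lower the overall concurrency limit at runtime. */
            workqueue_set_max_active(io_wq, new_max);

            /*
             * Guarantee that at least four interdependent work items can be
             * in flight; only meaningful for unbound workqueues.
             */
            workqueue_set_min_active(io_wq, 4);
            return 0;
    }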
struct work_struct *current_work(void)
    retrieve ``current`` task's work struct

    **Parameters**
        ``void`` -- no arguments

    **Description**

    Determine if ``current`` task is a workqueue worker and what it's working
    on. Useful to find out the context that the ``current`` task is running
    in.

    **Return**

    work struct if ``current`` task is a workqueue worker, ``NULL`` otherwise.
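The typical pattern for current_work() is a shared helper that wants to know
whether it was reached from a work item; the helper below is a hypothetical
sketch::

    #include <linux/workqueue.h>
    #include <linux/printk.h>

    static void example_report_context(void)
    {
            struct work_struct *work = current_work();

            if (work)
                    pr_info("running off a workqueue worker, work func %ps\n",
                            work->func);
            else
                    pr_info("not running off a workqueue\n");
    }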
bool current_is_workqueue_rescuer(void)
    is ``current`` workqueue rescuer?

    **Parameters**
        ``void`` -- no arguments

    **Description**

    Determine whether ``current`` is a workqueue rescuer. Can be used from
    work functions to determine whether it's being run off the rescuer task.

    **Return**

    ``true`` if ``current`` is a workqueue rescuer. ``false`` otherwise.
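A hedged sketch of how a work function might react to running off the
rescuer; skipping optional, allocation-heavy processing is an illustrative
policy, not something the API mandates::

    #include <linux/workqueue.h>

    static void example_work_fn(struct work_struct *work)
    {
            if (current_is_workqueue_rescuer()) {
                    /*
                     * Running off the rescuer implies the system is under
                     * memory pressure; do only the minimum needed to make
                     * forward progress.
                     */
                    return;
            }

            /* ... normal, possibly allocation-heavy processing ... */
    }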
bool workqueue_congested(int cpu, struct workqueue_struct *wq)
    test whether a workqueue is congested

    **Parameters**
        ``int cpu`` -- CPU in question
        ``struct workqueue_struct *wq`` -- target workqueue

    **Description**

    Test whether **wq**'s cpu workqueue for **cpu** is congested. There is no
    synchronization around this function and the test result is unreliable
    and only useful as advisory hints or for debugging.

    If **cpu** is WORK_CPU_UNBOUND, the test is performed on the local CPU.

    With the exception of ordered workqueues, all workqueues have per-cpu
    pool_workqueues, each with its own congested state. A workqueue being
    congested on one CPU doesn't mean that the workqueue is congested on any
    other CPU.

    **Return**

    ``true`` if congested, ``false`` otherwise.
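Because the result is unsynchronized and purely advisory,
workqueue_congested() is best treated as a hint. A sketch with hypothetical
names::

    #include <linux/workqueue.h>

    static struct workqueue_struct *example_wq;  /* hypothetical */
    static struct work_struct example_work;

    /*
     * Advisory only: the congestion state may change before queue_work()
     * runs, so this merely biases the decision to offload.
     */
    static bool example_try_offload(void)
    {
            if (workqueue_congested(WORK_CPU_UNBOUND, example_wq))
                    return false;   /* caller does the work inline instead */

            return queue_work(example_wq, &example_work);
    }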
unsigned int work_busy(struct work_struct *work)
    test whether a work is currently pending or running

    **Parameters**
        ``struct work_struct *work`` -- the work to be tested

    **Description**

    Test whether **work** is currently pending or running. There is no
    synchronization around this function and the test result is unreliable
    and only useful as advisory hints or for debugging.

    **Return**

    OR'd bitmask of WORK_BUSY_* bits.
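As a debugging aid, the returned bitmask can be decoded against
``WORK_BUSY_PENDING`` and ``WORK_BUSY_RUNNING``; a hedged sketch with a
hypothetical helper name::

    #include <linux/workqueue.h>
    #include <linux/printk.h>

    static void example_report_work_state(struct work_struct *work)
    {
            unsigned int busy = work_busy(work);

            /* Unreliable snapshot -- suitable for debug output only. */
            pr_debug("work %ps:%s%s\n", work->func,
                     (busy & WORK_BUSY_PENDING) ? " pending" : "",
                     (busy & WORK_BUSY_RUNNING) ? " running" : "");
    }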
void set_worker_desc(const char *fmt, ...)
    set description for the current work item

    **Parameters**
        ``const char *fmt`` -- printf-style format string
        ``...`` -- arguments for the format string

    **Description**

    This function can be called by a running work function to describe what
    the work item is about. If the worker task gets dumped, this information
    will be printed out together to help debugging. The description can be at
    most WORKER_DESC_LEN including the trailing '\0'.
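A sketch of set_worker_desc() called from inside a work function; the request
structure and its ``id`` field are hypothetical::

    #include <linux/workqueue.h>
    #include <linux/container_of.h>

    struct example_req {
            struct work_struct work;
            int id;
    };

    static void example_req_work(struct work_struct *work)
    {
            struct example_req *req = container_of(work, struct example_req, work);

            /*
             * Describe what this work item is doing; the text shows up in
             * worker dumps if the task is dumped while executing this item.
             */
            set_worker_desc("example req %d", req->id);

            /* ... process the request ... */
    }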
void print_worker_info(const char *log_lvl, struct task_struct *task)
    print out worker information and description

    **Parameters**
        ``const char *log_lvl`` -- the log level to use when printing
        ``struct task_struct *task`` -- target task

    **Description**

    If **task** is a worker and currently executing a work item, print out
    the name of the workqueue being serviced and the worker description set
    with set_worker_desc() by the currently executing work item.

    This function can be safely called on any task as long as the task_struct
    itself is accessible. While safe, this function isn't synchronized and
    may print out mixups or garbage of limited length.
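A sketch of print_worker_info() in a hypothetical debug path that dumps
information about a possibly stuck task::

    #include <linux/workqueue.h>
    #include <linux/sched.h>
    #include <linux/printk.h>

    static void example_dump_task(struct task_struct *task)
    {
            pr_info("task %s/%d appears stuck\n", task->comm, task->pid);

            /* Prints workqueue name and worker description if task is a worker. */
            print_worker_info(KERN_INFO, task);
    }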
void show_one_workqueue(struct workqueue_struct *wq)
    dump state of specified workqueue

    **Parameters**
        ``struct workqueue_struct *wq`` -- workqueue whose state will be printed


void show_one_worker_pool(struct worker_pool *pool)
    dump state of specified worker pool

    **Parameters**
        ``struct worker_pool *pool`` -- worker pool whose state will be printed


void show_all_workqueues(void)
    dump workqueue state

    **Parameters**
        ``void`` -- no arguments

    **Description**

    Called from a sysrq handler and prints out all busy workqueues and pools.
void show_freezable_workqueues(void)
    dump freezable workqueue state

    **Parameters**
        ``void`` -- no arguments

    **Description**

    Called from try_to_freeze_tasks() and prints out all freezable workqueues
    still busy.


void rebind_workers(struct worker_pool *pool)
    rebind all workers of a pool to the associated CPU

    **Parameters**
        ``struct worker_pool *pool`` -- pool of interest

    **Description**

    **pool->cpu** is coming online. Rebind all workers to the CPU.
void restore_unbound_workers_cpumask(struct worker_pool *pool, int cpu)
    restore cpumask of unbound workers

    **Parameters**
        ``struct worker_pool *pool`` -- unbound pool of interest
        ``int cpu`` -- the CPU which is coming up

    **Description**

    An unbound pool may end up with a cpumask which doesn't have any online
    CPUs. When a worker of such a pool gets scheduled, the scheduler resets
    its cpus_allowed. If **cpu** is in **pool**'s cpumask, which didn't have
    any online CPU before, the cpus_allowed of all its workers should be
    restored.
If }(hj>hhhNhNubj)}(h**cpu**h]hcpu}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj>ubh is in }(hj>hhhNhNubj)}(h**pool**h]hpool}(hjXhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj>ubhk’s cpumask which didn’t have any online CPU before, cpus_allowed of all its workers should be restored.}(hj>hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jwork_on_cpu_key (C function)c.work_on_cpu_keyhNtauh1jhjhhhNhNubj)}(hhh](j)}(hYlong work_on_cpu_key (int cpu, long (*fn)(void *), void *arg, struct lock_class_key *key)h]j )}(hWlong work_on_cpu_key(int cpu, long (*fn)(void*), void *arg, struct lock_class_key *key)h](j)}(hlongh]hlong}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM\ubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjhhhjhM\ubjI)}(hwork_on_cpu_keyh]jO)}(hwork_on_cpu_keyh]hwork_on_cpu_key}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjhhhjhM\ubj)}(hC(int cpu, long (*fn)(void*), void *arg, struct lock_class_key *key)h](j)}(hint cpuh](j)}(hinth]hint}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubjO)}(hcpuh]hcpu}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hlong (*fn)(void*)h](j)}(hlongh]hlong}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubj)}(h(h]h(}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hj-hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubjO)}(hfnh]hfn}(hj:hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubj)}(h)h]h)}(hjHhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hj!h]h(}(hjVhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hvoidh]hvoid}(hjchhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjqhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjJh]h)}(hj~hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(h void *argh](j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubjO)}(hargh]harg}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct lock_class_key *keyh](j&)}(hj)h]hstruct}(hjhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubh)}(hhh]jO)}(hlock_class_keyh]hlock_class_key}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjmodnameN classnameNjojr)}ju]jx)}jkjsbc.work_on_cpu_keyasbuh1hhjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubj)}(hjah]h*}(hj$hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubjO)}(hkeyh]hkey}(hj1hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1jhjhhhjhM\ubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjhhhjhM\ubah}(h]jah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjhM\hjhhubj{)}(hhh]h)}(h4run a function in thread context on a particular cpuh]h4run a function in thread context on a particular cpu}(hj[hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMQhjXhhubah}(h]h ]h"]h$]h&]uh1jzhjhhhjhM\ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjsjjsjjjuh1jhhhjhNhNubj)}(hX**Parameters** ``int cpu`` the cpu to run on ``long (*fn)(void *)`` the function to run ``void *arg`` the function arg ``struct lock_class_key *key`` The lock class key for lock debugging purposes **Description** It is up to the caller to ensure that the cpu doesn't go offline. 
The caller must not hold any locks which would prevent **fn** from completing. **Return** The value **fn** returns.h](h)}(h**Parameters**h]j)}(hj}h]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj{ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMUhjwubjS)}(hhh](jX)}(h``int cpu`` the cpu to run on h](j^)}(h ``int cpu``h]j)}(hjh]hint cpu}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMRhjubjw)}(hhh]h)}(hthe cpu to run onh]hthe cpu to run on}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMRhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMRhjubjX)}(h+``long (*fn)(void *)`` the function to run h](j^)}(h``long (*fn)(void *)``h]j)}(hjh]hlong (*fn)(void *)}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMShjubjw)}(hhh]h)}(hthe function to runh]hthe function to run}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMShjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMShjubjX)}(h``void *arg`` the function arg h](j^)}(h ``void *arg``h]j)}(hjh]h void *arg}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMThjubjw)}(hhh]h)}(hthe function argh]hthe function arg}(hj'hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj#hMThj$ubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhj#hMThjubjX)}(hN``struct lock_class_key *key`` The lock class key for lock debugging purposes h](j^)}(h``struct lock_class_key *key``h]j)}(hjGh]hstruct lock_class_key *key}(hjIhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjEubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMUhjAubjw)}(hhh]h)}(h.The lock class key for lock debugging purposesh]h.The lock class key for lock debugging purposes}(hj`hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj\hMUhj]ubah}(h]h ]h"]h$]h&]uh1jvhjAubeh}(h]h ]h"]h$]h&]uh1jWhj\hMUhjubeh}(h]h ]h"]h$]h&]uh1jRhjwubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMWhjwubh)}(hIt is up to the caller to ensure that the cpu doesn't go offline. The caller must not hold any locks which would prevent **fn** from completing.h](h{It is up to the caller to ensure that the cpu doesn’t go offline. 
The caller must not hold any locks which would prevent }(hjhhhNhNubj)}(h**fn**h]hfn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh from completing.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMWhjwubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMZhjwubh)}(hThe value **fn** returns.h](h The value }(hjhhhNhNubj)}(h**fn**h]hfn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh returns.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMZhjwubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j!work_on_cpu_safe_key (C function)c.work_on_cpu_safe_keyhNtauh1jhjhhhNhNubj)}(hhh](j)}(h^long work_on_cpu_safe_key (int cpu, long (*fn)(void *), void *arg, struct lock_class_key *key)h]j )}(h\long work_on_cpu_safe_key(int cpu, long (*fn)(void*), void *arg, struct lock_class_key *key)h](j)}(hlongh]hlong}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMuubj8)}(h h]h }(hj!hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjhhhj hMuubjI)}(hwork_on_cpu_safe_keyh]jO)}(hwork_on_cpu_safe_keyh]hwork_on_cpu_safe_key}(hj3hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj/ubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjhhhj hMuubj)}(hC(int cpu, long (*fn)(void*), void *arg, struct lock_class_key *key)h](j)}(hint cpuh](j)}(hinth]hint}(hjOhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjKubj8)}(h h]h }(hj]hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjKubjO)}(hcpuh]hcpu}(hjkhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjKubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjGubj)}(hlong (*fn)(void*)h](j)}(hlongh]hlong}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubj)}(hj!h]h(}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubjO)}(hfnh]hfn}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubj)}(hjJh]h)}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hj!h]h(}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjJh]h)}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjGubj)}(h void *argh](j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj8)}(h h]h }(hj#hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubj)}(hjah]h*}(hj1hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubjO)}(hargh]harg}(hj>hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjGubj)}(hstruct lock_class_key *keyh](j&)}(hj)h]hstruct}(hjWhhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hjSubj8)}(h h]h }(hjdhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjSubh)}(hhh]jO)}(hlock_class_keyh]hlock_class_key}(hjuhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjrubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjwmodnameN classnameNjojr)}ju]jx)}jkj5sbc.work_on_cpu_safe_keyasbuh1hhjSubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjSubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjSubjO)}(hkeyh]hkey}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjSubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjGubeh}(h]h ]h"]h$]h&]jjuh1jhjhhhj hMuubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhj hhhj hMuubah}(h]jah ](jrjseh"]h$]h&]jwjx)jyhuh1jhj hMuhjhhubj{)}(hhh]h)}(h4run a function in thread context on a particular cpuh]h4run a function in thread context on a particular cpu}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: 
./kernel/workqueue.chMjhjhhubah}(h]h ]h"]h$]h&]uh1jzhjhhhj hMuubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jhhhjhNhNubj)}(hX**Parameters** ``int cpu`` the cpu to run on ``long (*fn)(void *)`` the function to run ``void *arg`` the function argument ``struct lock_class_key *key`` The lock class key for lock debugging purposes **Description** Disables CPU hotplug and calls work_on_cpu(). The caller must not hold any locks which would prevent **fn** from completing. **Return** The value **fn** returns.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMnhjubjS)}(hhh](jX)}(h``int cpu`` the cpu to run on h](j^)}(h ``int cpu``h]j)}(hjh]hint cpu}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMkhjubjw)}(hhh]h)}(hthe cpu to run onh]hthe cpu to run on}(hj4hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj0hMkhj1ubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhj0hMkhjubjX)}(h+``long (*fn)(void *)`` the function to run h](j^)}(h``long (*fn)(void *)``h]j)}(hjTh]hlong (*fn)(void *)}(hjVhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjRubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMlhjNubjw)}(hhh]h)}(hthe function to runh]hthe function to run}(hjmhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjihMlhjjubah}(h]h ]h"]h$]h&]uh1jvhjNubeh}(h]h ]h"]h$]h&]uh1jWhjihMlhjubjX)}(h$``void *arg`` the function argument h](j^)}(h ``void *arg``h]j)}(hjh]h void *arg}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMmhjubjw)}(hhh]h)}(hthe function argumenth]hthe function argument}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMmhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMmhjubjX)}(hN``struct lock_class_key *key`` The lock class key for lock debugging purposes h](j^)}(h``struct lock_class_key *key``h]j)}(hjh]hstruct lock_class_key *key}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMnhjubjw)}(hhh]h)}(h.The lock class key for lock debugging purposesh]h.The lock class key for lock debugging purposes}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMnhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMnhjubeh}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMphjubh)}(h|Disables CPU hotplug and calls work_on_cpu(). The caller must not hold any locks which would prevent **fn** from completing.h](heDisables CPU hotplug and calls work_on_cpu(). 
The caller must not hold any locks which would prevent }(hjhhhNhNubj)}(h**fn**h]hfn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh from completing.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMphjubh)}(h **Return**h]j)}(hj:h]hReturn}(hj<hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj8ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMshjubh)}(hThe value **fn** returns.h](h The value }(hjPhhhNhNubj)}(h**fn**h]hfn}(hjXhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjPubh returns.}(hjPhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMshjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j$freeze_workqueues_begin (C function)c.freeze_workqueues_beginhNtauh1jhjhhhNhNubj)}(hhh](j)}(h#void freeze_workqueues_begin (void)h]j )}(h"void freeze_workqueues_begin(void)h](j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjhhhjhMubjI)}(hfreeze_workqueues_beginh]jO)}(hfreeze_workqueues_beginh]hfreeze_workqueues_begin}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjhhhjhMubj)}(h(void)h]j)}(hvoidh]j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1jhjhhhjhMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjhhhjhMubah}(h]jah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjhMhjhhubj{)}(hhh]h)}(hbegin freezing workqueuesh]hbegin freezing workqueues}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jzhjhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jhhhjhNhNubj)}(hX$**Parameters** ``void`` no arguments **Description** Start freezing workqueues. After this function returns, all freezable workqueues will queue new works to their inactive_works list instead of pool->worklist. **Context** Grabs and releases wq_pool_mutex, wq->mutex and pool->lock's.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubjS)}(hhh]jX)}(h``void`` no arguments h](j^)}(h``void``h]j)}(hj9h]hvoid}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj7ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj3ubjw)}(hhh]h)}(h no argumentsh]h no arguments}(hjRhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjNhMhjOubah}(h]h ]h"]h$]h&]uh1jvhj3ubeh}(h]h ]h"]h$]h&]uh1jWhjNhMhj0ubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjth]h Description}(hjvhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjrubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubh)}(hStart freezing workqueues. After this function returns, all freezable workqueues will queue new works to their inactive_works list instead of pool->worklist.h]hStart freezing workqueues. 
After this function returns, all freezable workqueues will queue new works to their inactive_works list instead of pool->worklist.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubh)}(h **Context**h]j)}(hjh]hContext}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubh)}(h=Grabs and releases wq_pool_mutex, wq->mutex and pool->lock's.h]h?Grabs and releases wq_pool_mutex, wq->mutex and pool->lock’s.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j#freeze_workqueues_busy (C function)c.freeze_workqueues_busyhNtauh1jhjhhhNhNubj)}(hhh](j)}(h"bool freeze_workqueues_busy (void)h]j )}(h!bool freeze_workqueues_busy(void)h](j)}(hj7&h]hbool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjhhhjhMubjI)}(hfreeze_workqueues_busyh]jO)}(hfreeze_workqueues_busyh]hfreeze_workqueues_busy}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjhhhjhMubj)}(h(void)h]j)}(hvoidh]j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1jhjhhhjhMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjhhhjhMubah}(h]jah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjhMhjhhubj{)}(hhh]h)}(h$are freezable workqueues still busy?h]h$are freezable workqueues still busy?}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjChhubah}(h]h ]h"]h$]h&]uh1jzhjhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjj^jj^jjjuh1jhhhjhNhNubj)}(hXK**Parameters** ``void`` no arguments **Description** Check whether freezing is complete. This function must be called between freeze_workqueues_begin() and thaw_workqueues(). **Context** Grabs and releases wq_pool_mutex. **Return** ``true`` if some freezable workqueues are still busy. ``false`` if freezing is complete.h](h)}(h**Parameters**h]j)}(hjhh]h Parameters}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjfubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjbubjS)}(hhh]jX)}(h``void`` no arguments h](j^)}(h``void``h]j)}(hjh]hvoid}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(h no argumentsh]h no arguments}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMhj~ubah}(h]h ]h"]h$]h&]uh1jRhjbubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjbubh)}(hzCheck whether freezing is complete. This function must be called between freeze_workqueues_begin() and thaw_workqueues().h]hzCheck whether freezing is complete. 
This function must be called between freeze_workqueues_begin() and thaw_workqueues().}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjbubh)}(h **Context**h]j)}(hjh]hContext}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjbubh)}(h!Grabs and releases wq_pool_mutex.h]h!Grabs and releases wq_pool_mutex.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjbubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjbubh)}(hY``true`` if some freezable workqueues are still busy. ``false`` if freezing is complete.h](j)}(h``true``h]htrue}(hj*hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj&ubh/ if some freezable workqueues are still busy. }(hj&hhhNhNubj)}(h ``false``h]hfalse}(hj<hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj&ubh if freezing is complete.}(hj&hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjbubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jthaw_workqueues (C function)c.thaw_workqueueshNtauh1jhjhhhNhNubj)}(hhh](j)}(hvoid thaw_workqueues (void)h]j )}(hvoid thaw_workqueues(void)h](j)}(hvoidh]hvoid}(hjuhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjqhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjqhhhjhMubjI)}(hthaw_workqueuesh]jO)}(hthaw_workqueuesh]hthaw_workqueues}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjqhhhjhMubj)}(h(void)h]j)}(hvoidh]j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1jhjqhhhjhMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjmhhhjhMubah}(h]jhah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjhMhjjhhubj{)}(hhh]h)}(hthaw workqueuesh]hthaw workqueues}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jzhjjhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jhhhjhNhNubj)}(hX**Parameters** ``void`` no arguments **Description** Thaw workqueues. Normal queueing is restored and all collected frozen works are transferred to their respective pool worklists. **Context** Grabs and releases wq_pool_mutex, wq->mutex and pool->lock's.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubjS)}(hhh]jX)}(h``void`` no arguments h](j^)}(h``void``h]j)}(hjh]hvoid}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(h no argumentsh]h no arguments}(hj6hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj2hMhj3ubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhj2hMhjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjXh]h Description}(hjZhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjVubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubh)}(hThaw workqueues. Normal queueing is restored and all collected frozen works are transferred to their respective pool worklists.h]hThaw workqueues. 
Normal queueing is restored and all collected frozen works are transferred to their respective pool worklists.}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubh)}(h **Context**h]j)}(hjh]hContext}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj}ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubh)}(h=Grabs and releases wq_pool_mutex, wq->mutex and pool->lock's.h]h?Grabs and releases wq_pool_mutex, wq->mutex and pool->lock’s.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j.workqueue_unbound_exclude_cpumask (C function)#c.workqueue_unbound_exclude_cpumaskhNtauh1jhjhhhNhNubj)}(hhh](j)}(hEint workqueue_unbound_exclude_cpumask (cpumask_var_t exclude_cpumask)h]j )}(hDint workqueue_unbound_exclude_cpumask(cpumask_var_t exclude_cpumask)h](j)}(hinth]hint}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjhhhjhMubjI)}(h!workqueue_unbound_exclude_cpumaskh]jO)}(h!workqueue_unbound_exclude_cpumaskh]h!workqueue_unbound_exclude_cpumask}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjhhhjhMubj)}(h(cpumask_var_t exclude_cpumask)h]j)}(hcpumask_var_t exclude_cpumaskh](h)}(hhh]jO)}(h cpumask_var_th]h cpumask_var_t}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetjmodnameN classnameNjojr)}ju]jx)}jkjsb#c.workqueue_unbound_exclude_cpumaskasbuh1hhjubj8)}(h h]h }(hj$hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjubjO)}(hexclude_cpumaskh]hexclude_cpumask}(hj2hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1jhjhhhjhMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjhhhjhMubah}(h]jah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjhMhjhhubj{)}(hhh]h)}(h'Exclude given CPUs from unbound cpumaskh]h'Exclude given CPUs from unbound cpumask}(hj\hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjYhhubah}(h]h ]h"]h$]h&]uh1jzhjhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjtjjtjjjuh1jhhhjhNhNubj)}(h**Parameters** ``cpumask_var_t exclude_cpumask`` the cpumask to be excluded from wq_unbound_cpumask **Description** This function can be called from cpuset code to provide a set of isolated CPUs that should be excluded from wq_unbound_cpumask.h](h)}(h**Parameters**h]j)}(hj~h]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj|ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjxubjS)}(hhh]jX)}(hU``cpumask_var_t exclude_cpumask`` the cpumask to be excluded from wq_unbound_cpumask h](j^)}(h!``cpumask_var_t exclude_cpumask``h]j)}(hjh]hcpumask_var_t exclude_cpumask}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(h2the cpumask to be excluded from wq_unbound_cpumaskh]h2the cpumask to be excluded from wq_unbound_cpumask}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMhjubah}(h]h ]h"]h$]h&]uh1jRhjxubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: 
./kernel/workqueue.chMhjxubh)}(hThis function can be called from cpuset code to provide a set of isolated CPUs that should be excluded from wq_unbound_cpumask.h]hThis function can be called from cpuset code to provide a set of isolated CPUs that should be excluded from wq_unbound_cpumask.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjxubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j*workqueue_set_unbound_cpumask (C function)c.workqueue_set_unbound_cpumaskhNtauh1jhjhhhNhNubj)}(hhh](j)}(h9int workqueue_set_unbound_cpumask (cpumask_var_t cpumask)h]j )}(h8int workqueue_set_unbound_cpumask(cpumask_var_t cpumask)h](j)}(hinth]hint}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMiubj8)}(h h]h }(hj, hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj hhhj+ hMiubjI)}(hworkqueue_set_unbound_cpumaskh]jO)}(hworkqueue_set_unbound_cpumaskh]hworkqueue_set_unbound_cpumask}(hj> hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj: ubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhj hhhj+ hMiubj)}(h(cpumask_var_t cpumask)h]j)}(hcpumask_var_t cpumaskh](h)}(hhh]jO)}(h cpumask_var_th]h cpumask_var_t}(hj] hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjZ ubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetj_ modnameN classnameNjojr)}ju]jx)}jkj@ sbc.workqueue_set_unbound_cpumaskasbuh1hhjV ubj8)}(h h]h }(hj} hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjV ubjO)}(hcpumaskh]hcpumask}(hj hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjV ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjR ubah}(h]h ]h"]h$]h&]jjuh1jhj hhhj+ hMiubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhj hhhj+ hMiubah}(h]j ah ](jrjseh"]h$]h&]jwjx)jyhuh1jhj+ hMihj hhubj{)}(hhh]h)}(h!Set the low-level unbound cpumaskh]h!Set the low-level unbound cpumask}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM^hj hhubah}(h]h ]h"]h$]h&]uh1jzhj hhhj+ hMiubeh}(h]h ](jfunctioneh"]h$]h&]jjjj jj jjjuh1jhhhjhNhNubj)}(hX**Parameters** ``cpumask_var_t cpumask`` the cpumask to set The low-level workqueues cpumask is a global cpumask that limits the affinity of all unbound workqueues. This function check the **cpumask** and apply it to all unbound workqueues and updates all pwqs of them. **Return** 0 - Success -EINVAL - Invalid **cpumask** -ENOMEM - Failed to allocate memory for attrs or pwqs.h](h)}(h**Parameters**h]j)}(hj h]h Parameters}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMbhj ubjS)}(hhh]jX)}(hX``cpumask_var_t cpumask`` the cpumask to set The low-level workqueues cpumask is a global cpumask that limits the affinity of all unbound workqueues. This function check the **cpumask** and apply it to all unbound workqueues and updates all pwqs of them. h](j^)}(h``cpumask_var_t cpumask``h]j)}(hj h]hcpumask_var_t cpumask}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMchj ubjw)}(hhh](h)}(hthe cpumask to seth]hthe cpumask to set}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM_hj ubh)}(hThe low-level workqueues cpumask is a global cpumask that limits the affinity of all unbound workqueues. 
This function check the **cpumask** and apply it to all unbound workqueues and updates all pwqs of them.h](hThe low-level workqueues cpumask is a global cpumask that limits the affinity of all unbound workqueues. This function check the }(hj hhhNhNubj)}(h **cpumask**h]hcpumask}(hj& hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubhE and apply it to all unbound workqueues and updates all pwqs of them.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMahj ubeh}(h]h ]h"]h$]h&]uh1jvhj ubeh}(h]h ]h"]h$]h&]uh1jWhj hMchj ubah}(h]h ]h"]h$]h&]uh1jRhj ubh)}(h **Return**h]j)}(hjS h]hReturn}(hjU hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjQ ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMehj ubjS)}(hhh]jX)}(hf0 - Success -EINVAL - Invalid **cpumask** -ENOMEM - Failed to allocate memory for attrs or pwqs.h](j^)}(h0 - Successh]h0 - Success}(hjp hhhNhNubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMfhjl ubjw)}(hhh]h)}(hT-EINVAL - Invalid **cpumask** -ENOMEM - Failed to allocate memory for attrs or pwqs.h](h-EINVAL - Invalid }(hj hhhNhNubj)}(h **cpumask**h]hcpumask}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubh7 -ENOMEM - Failed to allocate memory for attrs or pwqs.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhj~ hMfhj ubah}(h]h ]h"]h$]h&]uh1jvhjl ubeh}(h]h ]h"]h$]h&]uh1jWhj~ hMfhji ubah}(h]h ]h"]h$]h&]uh1jRhj ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j%workqueue_sysfs_register (C function)c.workqueue_sysfs_registerhNtauh1jhjhhhNhNubj)}(hhh](j)}(h:int workqueue_sysfs_register (struct workqueue_struct *wq)h]j )}(h9int workqueue_sysfs_register(struct workqueue_struct *wq)h](j)}(hinth]hint}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hj hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj hhhj hMubjI)}(hworkqueue_sysfs_registerh]jO)}(hworkqueue_sysfs_registerh]hworkqueue_sysfs_register}(hj hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj ubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhj hhhj hMubj)}(h(struct workqueue_struct *wq)h]j)}(hstruct workqueue_struct *wqh](j&)}(hj)h]hstruct}(hj hhhNhNubah}(h]h ]j2ah"]h$]h&]uh1j%hj ubj8)}(h h]h }(hj hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj ubh)}(hhh]jO)}(hworkqueue_structh]hworkqueue_struct}(hj/ hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj, ubah}(h]h ]h"]h$]h&] refdomainjreftypejk reftargetj1 modnameN classnameNjojr)}ju]jx)}jkj sbc.workqueue_sysfs_registerasbuh1hhj ubj8)}(h h]h }(hjO hhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj ubj)}(hjah]h*}(hj] hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj ubjO)}(hwqh]hwq}(hjj hhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhj ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj ubah}(h]h ]h"]h$]h&]jjuh1jhj hhhj hMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhj hhhj hMubah}(h]j ah ](jrjseh"]h$]h&]jwjx)jyhuh1jhj hMhj hhubj{)}(hhh]h)}(h!make a workqueue visible in sysfsh]h!make a workqueue visible in sysfs}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj hhubah}(h]h ]h"]h$]h&]uh1jzhj hhhj hMubeh}(h]h ](jfunctioneh"]h$]h&]jjjj jj jjjuh1jhhhjhNhNubj)}(hX**Parameters** ``struct workqueue_struct *wq`` the workqueue to register **Description** Expose **wq** in sysfs under /sys/bus/workqueue/devices. alloc_workqueue*() automatically calls this function if WQ_SYSFS is set which is the preferred method. 
Workqueue user should use this function directly iff it wants to apply workqueue_attrs before making the workqueue visible in sysfs; otherwise, apply_workqueue_attrs() may race against userland updating the attributes. **Return** 0 on success, -errno on failure.h](h)}(h**Parameters**h]j)}(hj h]h Parameters}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj ubjS)}(hhh]jX)}(h:``struct workqueue_struct *wq`` the workqueue to register h](j^)}(h``struct workqueue_struct *wq``h]j)}(hj h]hstruct workqueue_struct *wq}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj ubjw)}(hhh]h)}(hthe workqueue to registerh]hthe workqueue to register}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj hMhj ubah}(h]h ]h"]h$]h&]uh1jvhj ubeh}(h]h ]h"]h$]h&]uh1jWhj hMhj ubah}(h]h ]h"]h$]h&]uh1jRhj ubh)}(h**Description**h]j)}(hj h]h Description}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj ubh)}(hExpose **wq** in sysfs under /sys/bus/workqueue/devices. alloc_workqueue*() automatically calls this function if WQ_SYSFS is set which is the preferred method.h](hExpose }(hj& hhhNhNubj)}(h**wq**h]hwq}(hj. hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj& ubh in sysfs under /sys/bus/workqueue/devices. alloc_workqueue*() automatically calls this function if WQ_SYSFS is set which is the preferred method.}(hj& hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj ubh)}(hWorkqueue user should use this function directly iff it wants to apply workqueue_attrs before making the workqueue visible in sysfs; otherwise, apply_workqueue_attrs() may race against userland updating the attributes.h]hWorkqueue user should use this function directly iff it wants to apply workqueue_attrs before making the workqueue visible in sysfs; otherwise, apply_workqueue_attrs() may race against userland updating the attributes.}(hjG hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj ubh)}(h **Return**h]j)}(hjX h]hReturn}(hjZ hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjV ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj ubh)}(h 0 on success, -errno on failure.h]h 0 on success, -errno on failure.}(hjn hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhj ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j'workqueue_sysfs_unregister (C function)c.workqueue_sysfs_unregisterhNtauh1jhjhhhNhNubj)}(hhh](j)}(h=void workqueue_sysfs_unregister (struct workqueue_struct *wq)h]j )}(hhM6ubjI)}(hworkqueue_init_earlyh]jO)}(hworkqueue_init_earlyh]hworkqueue_init_early}(hjQhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjMubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhj,hhhj>hM6ubj)}(h(void)h]j)}(hvoidh]j)}(hvoidh]hvoid}(hjmhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjiubah}(h]h ]h"]h$]h&]noemphjjuh1jhjeubah}(h]h ]h"]h$]h&]jjuh1jhj,hhhj>hM6ubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhj(hhhj>hM6ubah}(h]j#ah ](jrjseh"]h$]h&]jwjx)jyhuh1jhj>hM6hj%hhubj{)}(hhh]h)}(h"early init for workqueue subsystemh]h"early init for workqueue subsystem}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: 
./kernel/workqueue.chM-hjhhubah}(h]h ]h"]h$]h&]uh1jzhj%hhhj>hM6ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jhhhjhNhNubj)}(hX**Parameters** ``void`` no arguments **Description** This is the first step of three-staged workqueue subsystem initialization and invoked as soon as the bare basics - memory allocation, cpumasks and idr are up. It sets up all the data structures and system workqueues and allows early boot code to create workqueues and queue/cancel work items. Actual work item execution starts only after kthreads can be created and scheduled right before early initcalls.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM1hjubjS)}(hhh]jX)}(h``void`` no arguments h](j^)}(h``void``h]j)}(hjh]hvoid}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM4hjubjw)}(hhh]h)}(h no argumentsh]h no arguments}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM4hjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhM4hjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM6hjubh)}(hXThis is the first step of three-staged workqueue subsystem initialization and invoked as soon as the bare basics - memory allocation, cpumasks and idr are up. It sets up all the data structures and system workqueues and allows early boot code to create workqueues and queue/cancel work items. Actual work item execution starts only after kthreads can be created and scheduled right before early initcalls.h]hXThis is the first step of three-staged workqueue subsystem initialization and invoked as soon as the bare basics - memory allocation, cpumasks and idr are up. It sets up all the data structures and system workqueues and allows early boot code to create workqueues and queue/cancel work items. 
Actual work item execution starts only after kthreads can be created and scheduled right before early initcalls.}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chM.hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jworkqueue_init (C function)c.workqueue_inithNtauh1jhjhhhNhNubj)}(hhh](j)}(hvoid workqueue_init (void)h]j )}(hvoid workqueue_init(void)h](j)}(hvoidh]hvoid}(hjXhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjThhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMubj8)}(h h]h }(hjghhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hjThhhjfhMubjI)}(hworkqueue_inith]jO)}(hworkqueue_inith]hworkqueue_init}(hjyhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjuubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhjThhhjfhMubj)}(h(void)h]j)}(hvoidh]j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1jhjThhhjfhMubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjPhhhjfhMubah}(h]jKah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjfhMhjMhhubj{)}(hhh]h)}(h&bring workqueue subsystem fully onlineh]h&bring workqueue subsystem fully online}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jzhjMhhhjfhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jhhhjhNhNubj)}(hX**Parameters** ``void`` no arguments **Description** This is the second step of three-staged workqueue subsystem initialization and invoked as soon as kthreads can be created and scheduled. Workqueues have been created and work items queued on them, but there are no kworkers executing the work items yet. Populate the worker pools with the initial workers and enable future kworker creations.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubjS)}(hhh]jX)}(h``void`` no arguments h](j^)}(h``void``h]j)}(hjh]hvoid}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(h no argumentsh]h no arguments}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMhjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hj;h]h Description}(hj=hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj9ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubh)}(hXTThis is the second step of three-staged workqueue subsystem initialization and invoked as soon as kthreads can be created and scheduled. Workqueues have been created and work items queued on them, but there are no kworkers executing the work items yet. Populate the worker pools with the initial workers and enable future kworker creations.h]hXTThis is the second step of three-staged workqueue subsystem initialization and invoked as soon as kthreads can be created and scheduled. Workqueues have been created and work items queued on them, but there are no kworkers executing the work items yet. 
Populate the worker pools with the initial workers and enable future kworker creations.}(hjQhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j$workqueue_init_topology (C function)c.workqueue_init_topologyhNtauh1jhjhhhNhNubj)}(hhh](j)}(h#void workqueue_init_topology (void)h]j )}(h"void workqueue_init_topology(void)h](j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj|hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMJubj8)}(h h]h }(hjhhhNhNubah}(h]h ]jDah"]h$]h&]uh1j7hj|hhhjhMJubjI)}(hworkqueue_init_topologyh]jO)}(hworkqueue_init_topologyh]hworkqueue_init_topology}(hjhhhNhNubah}(h]h ]jZah"]h$]h&]uh1jNhjubah}(h]h ](jajbeh"]h$]h&]jjuh1jHhj|hhhjhMJubj)}(h(void)h]j)}(hvoidh]j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1jhj|hhhjhMJubeh}(h]h ]h"]h$]h&]jjjluh1jjmjnhjxhhhjhMJubah}(h]jsah ](jrjseh"]h$]h&]jwjx)jyhuh1jhjhMJhjuhhubj{)}(hhh]h)}(h*initialize CPU pods for unbound workqueuesh]h*initialize CPU pods for unbound workqueues}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMDhjhhubah}(h]h ]h"]h$]h&]uh1jzhjuhhhjhMJubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jhhhjhNhNubj)}(h**Parameters** ``void`` no arguments **Description** This is the third step of three-staged workqueue subsystem initialization and invoked after SMP and topology information are fully initialized. It initializes the unbound CPU pods accordingly.h](h)}(h**Parameters**h]j)}(hj h]h Parameters}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMHhjubjS)}(hhh]jX)}(h``void`` no arguments h](j^)}(h``void``h]j)}(hj(h]hvoid}(hj*hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj&ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMKhj"ubjw)}(hhh]h)}(h no argumentsh]h no arguments}(hjAhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj=hMKhj>ubah}(h]h ]h"]h$]h&]uh1jvhj"ubeh}(h]h ]h"]h$]h&]uh1jWhj=hMKhjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjch]h Description}(hjehhhNhNubah}(h]h ]h"]h$]h&]uh1jhjaubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMMhjubh)}(hThis is the third step of three-staged workqueue subsystem initialization and invoked after SMP and topology information are fully initialized. It initializes the unbound CPU pods accordingly.h]hThis is the third step of three-staged workqueue subsystem initialization and invoked after SMP and topology information are fully initialized. 
It initializes the unbound CPU pods accordingly.}(hjyhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:783: ./kernel/workqueue.chMEhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjhhhNhNubeh}(h]&kernel-inline-documentations-referenceah ]h"]&kernel inline documentations referenceah$]h&]uh1hhhhhhhhM ubeh}(h] workqueueah ]h"] workqueueah$]h&]uh1hhhhhhhhKubeh}(h]h ]h"]h$]h&]sourcehuh1hcurrent_sourceN current_lineNsettingsdocutils.frontendValues)}(hN generatorN datestampN source_linkN source_urlN toc_backlinksj footnote_backlinksK sectnum_xformKstrip_commentsNstrip_elements_with_classesN strip_classesN report_levelK halt_levelKexit_status_levelKdebugNwarning_streamN tracebackinput_encoding utf-8-siginput_encoding_error_handlerstrictoutput_encodingutf-8output_encoding_error_handlerjerror_encodingutf-8error_encoding_error_handlerbackslashreplace language_codeenrecord_dependenciesNconfigN id_prefixhauto_id_prefixid dump_settingsNdump_internalsNdump_transformsNdump_pseudo_xmlNexpose_internalsNstrict_visitorN_disable_configN_sourceh _destinationN _config_files]7/var/lib/git/docbuild/linux/Documentation/docutils.confafile_insertion_enabled raw_enabledKline_length_limitM'pep_referencesN pep_base_urlhttps://peps.python.org/pep_file_url_templatepep-%04drfc_referencesN rfc_base_url&https://datatracker.ietf.org/doc/html/ tab_widthKtrim_footnote_reference_spacesyntax_highlightlong smart_quotessmartquotes_locales]character_level_inline_markupdoctitle_xform docinfo_xformKsectsubtitle_xform image_loadinglinkembed_stylesheetcloak_email_addressessection_self_linkenvNubreporterNindirect_targets]substitution_defs}substitution_names}refnames}refids}nameids}(jjjjjgjdjjjgj'j$jojljjjjj jj j jjj?j<j(j%jjjjj"jjejbjSjPjjjju nametypes}(jjjgjjj'jojjj j jj?j(jjj"jejSjjuh}(jhjjjdjjgjjj$jmjlj8jjrjj*jjj j jj j<j! j%jBjj+jjjjjbj%jPjhjjVjjjjjajfjEjJj)j.jujzjW"j\"jS$jX$j(&j-&j")j')j+j+j.j.jv0j{0jJ2jO2j4j4jf7jk7jp9ju9j:j:j<j<jf=jk=j>j>j@j@jBjBjEj Ej3Gj8GjHjHjoJjtJjMjMjQj$QjTjTjLVjQVjWjWjNYjSYjZjZj\j\j_j_jajajcjcj2ej7ejfjfj ijij>kjCkjljlj=ojBojpjpjMsjRsjwjwjzjzj~j~jjjjjjjĉjɉjjjJjOjjj=jBjYj^jΘjӘjOjTjߛjj؝jݝjjjJjOjjjijnj jjj#jjjjjjjjjjjljqj jj`jejjjZj_jgjljqjvjHjMjjjHjMjjjZj_jjj0j5jjjjj`jejjjRjWjjjjjoj3j8jjjjjjjjjjjjjjjjjjj7j<jjjjjjjqjvjjjj jjjjjhjmjjj j j j j j j#j(jKjPjsjxu footnote_refs} citation_refs} autofootnotes]autofootnote_refs]symbol_footnotes]symbol_footnote_refs] footnotes] citations]autofootnote_startKsymbol_footnote_startK id_counter collectionsCounter}Rparse_messages]transform_messages] transformerN include_log] decorationNhhub.
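A minimal usage sketch for the work_on_cpu() family documented above, assuming a
kernel-module context; the helper name which_cpu(), example_work_on_cpu() and the
target CPU number are illustrative only and not part of the workqueue API::

    #include <linux/workqueue.h>
    #include <linux/smp.h>
    #include <linux/printk.h>

    /* Illustrative callback: runs in process context on the requested CPU
     * and reports which CPU it actually executed on. */
    static long which_cpu(void *arg)
    {
            return raw_smp_processor_id();
    }

    static void example_work_on_cpu(void)
    {
            long ran_on;

            /* The caller must keep CPU 1 online for the duration, e.g. by
             * holding cpus_read_lock(); work_on_cpu_safe() instead disables
             * CPU hotplug around the call, as described above. */
            ran_on = work_on_cpu(1, which_cpu, NULL);
            pr_info("callback executed on CPU %ld\n", ran_on);
    }

Because work_on_cpu() blocks until the callback returns, the caller must not hold
any lock that the callback, or anything the callback waits on, might need.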