=========
Workqueue
=========

:Date: September, 2010
:Author: Tejun Heo <tj@kernel.org>
:Author: Florian Mickler <florian@mickler.org>


Introduction
============

There are many cases where an asynchronous process execution context
is needed and the workqueue (wq) API is the most commonly used
mechanism for such cases.

When such an asynchronous execution context is needed, a work item
describing which function to execute is put on a queue. An
independent thread serves as the asynchronous execution context. The
queue is called workqueue and the thread is called worker.

While there are work items on the workqueue the worker executes the
functions associated with the work items one after the other. When
there is no work item left on the workqueue the worker becomes idle.
When a new work item gets queued, the worker begins executing again.
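The pattern above can be sketched as a minimal kernel-side fragment.
This is an illustrative sketch, not part of this document's original
examples; the names (``my_work_fn``, ``my_work``) are made up, and a
real module would add init/exit plumbing:

```c
#include <linux/workqueue.h>
#include <linux/printk.h>

/* The function the work item describes; runs in worker (process) context. */
static void my_work_fn(struct work_struct *work)
{
	pr_info("work item executed\n");
}

/* The work item: records which function to execute when dequeued. */
static DECLARE_WORK(my_work, my_work_fn);

static void kick_off_async_work(void)
{
	/*
	 * Put the work item on a queue; an idle worker picks it up and
	 * runs my_work_fn(). schedule_work() uses the system workqueue.
	 */
	schedule_work(&my_work);
}
```

A dedicated workqueue created with ``alloc_workqueue()`` and
``queue_work()`` follows the same shape; ``schedule_work()`` is simply
the convenience wrapper for the system wq.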
Why Concurrency Managed Workqueue?
==================================

In the original wq implementation, a multi threaded (MT) wq had one
worker thread per CPU and a single threaded (ST) wq had one worker
thread system-wide. A single MT wq needed to keep around the same
number of workers as the number of CPUs. The kernel grew a lot of MT
wq users over the years and with the number of CPU cores continuously
rising, some systems saturated the default 32k PID space just booting
up.

Although MT wq wasted a lot of resource, the level of concurrency
provided was unsatisfactory. The limitation was common to both ST and
MT wq albeit less severe on MT. Each wq maintained its own separate
worker pool. An MT wq could provide only one execution context per
CPU while an ST wq one for the whole system. Work items had to
compete for those very limited execution contexts leading to various
problems including proneness to deadlocks around the single execution
context.

The tension between the provided level of concurrency and resource
usage also forced its users to make unnecessary tradeoffs like libata
choosing to use ST wq for polling PIOs and accepting an unnecessary
limitation that no two polling PIOs can progress at the same time. As
MT wq don't provide much better concurrency, users which require
higher level of concurrency, like async or fscache, had to implement
their own thread pool.

Concurrency Managed Workqueue (cmwq) is a reimplementation of wq with
focus on the following goals.

* Maintain compatibility with the original workqueue API.

* Use per-CPU unified worker pools shared by all wq to provide
  flexible level of concurrency on demand without wasting a lot of
  resource.
Example Execution Scenarios
===========================

In the scenarios below, work items w0, w1 and w2 are queued to a
bound wq q0 on the same CPU. w0 burns CPU for 5ms, sleeps for 10ms,
then burns CPU for 5ms again before finishing. w1 and w2 burn CPU for
5ms then sleep for 10ms.

With cmwq with ``@max_active`` >= 3, ::

 TIME IN MSECS	EVENT
 0		w0 starts and burns CPU
 5		w0 sleeps
 5		w1 starts and burns CPU
 10		w1 sleeps
 10		w2 starts and burns CPU
 15		w2 sleeps
 15		w0 wakes up and burns CPU
 20		w0 finishes
 20		w1 wakes up and finishes
 25		w2 wakes up and finishes

If ``@max_active`` == 2, ::

 TIME IN MSECS	EVENT
 0		w0 starts and burns CPU
 5		w0 sleeps
 5		w1 starts and burns CPU
 10		w1 sleeps
 15		w0 wakes up and burns CPU
 20		w0 finishes
 20		w1 wakes up and finishes
 20		w2 starts and burns CPU
 25		w2 sleeps
 35		w2 wakes up and finishes

Now, let's assume w1 and w2 are queued to a different wq q1 which has
``WQ_CPU_INTENSIVE`` set, ::
 TIME IN MSECS	EVENT
 0		w0 starts and burns CPU
 5		w0 sleeps
 5		w1 and w2 start and burn CPU
 10		w1 sleeps
 15		w2 sleeps
 15		w0 wakes up and burns CPU
 20		w0 finishes
 20		w1 wakes up and finishes
 25		w2 wakes up and finishes


Guidelines
==========

* Do not forget to use ``WQ_MEM_RECLAIM`` if a wq may process work
  items which are used during memory reclaim. Each wq with
  ``WQ_MEM_RECLAIM`` set has an execution context reserved for it. If
  there is dependency among multiple work items used during memory
  reclaim, they should be queued to separate wq each with
  ``WQ_MEM_RECLAIM``.
* Unless strict ordering is required, there is no need to use ST wq.

* Unless there is a specific need, using 0 for @max_active is
  recommended. In most use cases, concurrency level usually stays
  well under the default limit.

* A wq serves as a domain for forward progress guarantee
  (``WQ_MEM_RECLAIM``), flush and work item attributes. Work items
  which are not involved in memory reclaim and don't need to be
  flushed as a part of a group of work items, and don't require any
  special attribute, can use one of the system wq. There is no
  difference in execution characteristics between using a dedicated wq
  and a system wq.

  Note: If something may generate more than @max_active outstanding
  work items (do stress test your producers), it may saturate a system
  wq and potentially lead to deadlock. It should utilize its own
  dedicated workqueue rather than the system wq.

* Unless work items are expected to consume a huge amount of CPU
  cycles, using a bound wq is usually beneficial due to the increased
  level of locality in wq operations and work item execution.
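Taken together, the guidelines above commonly reduce to a single
``alloc_workqueue()`` call. The following is an illustrative sketch
(the workqueue name and function are made up; real callers handle
module lifetime and call ``destroy_workqueue()`` on teardown):

```c
#include <linux/workqueue.h>

static struct workqueue_struct *my_reclaim_wq;

static int my_init(void)
{
	/*
	 * WQ_MEM_RECLAIM reserves an execution context (a rescuer) so
	 * this wq can make forward progress even under memory pressure.
	 * A @max_active of 0 selects the default concurrency limit,
	 * which is sufficient for most users.
	 */
	my_reclaim_wq = alloc_workqueue("my_reclaim_wq", WQ_MEM_RECLAIM, 0);
	if (!my_reclaim_wq)
		return -ENOMEM;
	return 0;
}
```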
Affinity Scopes
===============

An unbound workqueue groups CPUs according to its affinity scope to
improve cache locality. For example, if a workqueue is using the
default affinity scope of "cache", it will group CPUs according to
last level cache boundaries. A work item queued on the workqueue will
be assigned to a worker on one of the CPUs which share the last level
cache with the issuing CPU. Once started, the worker may or may not
be allowed to move outside the scope depending on the
``affinity_strict`` setting of the scope.
Workqueue currently supports the following affinity scopes.

``default``
  Use the scope in module parameter
  ``workqueue.default_affinity_scope`` which is always set to one of
  the scopes below.

``cpu``
  CPUs are not grouped. A work item issued on one CPU is processed by
  a worker on the same CPU. This makes unbound workqueues behave as
  per-cpu workqueues without concurrency management.

``smt``
  CPUs are grouped according to SMT boundaries. This usually means
  that the logical threads of each physical CPU core are grouped
  together.

``cache``
  CPUs are grouped according to cache boundaries. Which specific
  cache boundary is used is determined by the arch code. L3 is used
  in a lot of cases. This is the default affinity scope.

``numa``
  CPUs are grouped according to NUMA boundaries.

``system``
  All CPUs are put in the same group. Workqueue makes no effort to
  process a work item on a CPU close to the issuing CPU.

The default affinity scope can be changed with the module parameter
``workqueue.default_affinity_scope`` and a specific workqueue's
affinity scope can be changed using ``apply_workqueue_attrs()``.
If ``WQ_SYSFS`` is set, the workqueue will have the following affinity
scope related interface files under its
``/sys/devices/virtual/workqueue/WQ_NAME/`` directory.

``affinity_scope``
  Read to see the current affinity scope. Write to change.

  When default is the current scope, reading this file will also show
  the current effective scope in parentheses, for example,
  ``default (cache)``.

``affinity_strict``
  0 by default indicating that affinity scopes are not strict. When a
  work item starts execution, workqueue makes a best-effort attempt to
  ensure that the worker is inside its affinity scope, which is called
  repatriation. Once started, the scheduler is free to move the worker
  anywhere in the system as it sees fit. This enables benefiting from
  scope locality while still being able to utilize other CPUs if
  necessary and available.

  If set to 1, all workers of the scope are guaranteed always to be in
  the scope. This may be useful when crossing affinity scopes has
  other implications, for example, in terms of power consumption or
  workload isolation. Strict NUMA scope can also be used to match the
  workqueue behavior of older kernels.
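In code, the same settings can be applied with
``apply_workqueue_attrs()``. The sketch below assumes an affinity-scope
capable kernel (v6.5+) and in-kernel (built-in) usage, since
``apply_workqueue_attrs()`` is not exported to modules; ``wq`` must be
an unbound workqueue, and the function name is illustrative:

```c
#include <linux/workqueue.h>

/* Sketch: pin an unbound workqueue to strict NUMA affinity scope. */
static int make_wq_numa_strict(struct workqueue_struct *wq)
{
	struct workqueue_attrs *attrs;
	int ret;

	attrs = alloc_workqueue_attrs();
	if (!attrs)
		return -ENOMEM;

	attrs->affn_scope = WQ_AFFN_NUMA;	/* group CPUs by NUMA node */
	attrs->affn_strict = true;		/* workers never leave the scope */
	ret = apply_workqueue_attrs(wq, attrs);

	free_workqueue_attrs(attrs);
	return ret;
}
```

For workqueues created with ``WQ_SYSFS``, the same effect is available
at runtime by writing to the ``affinity_scope`` and ``affinity_strict``
files described above.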
This may be useful when crossing affinity scopes has other implications, for example, in terms of power consumption or workload isolation. Strict NUMA scope can also be used to match the workqueue behavior of older kernels.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj ubeh}(h]h ]h"]h$]h&]uh1jvhj ubeh}(h]h ]h"]h$]h&]uh1jWhhhMhjk hhubeh}(h]h ]h"]h$]h&]uh1jRhj] hhhhhNubeh}(h]affinity-scopesah ]h"]affinity scopesah$]h&]uh1hhhhhhhhMyubh)}(hhh](h)}(hAffinity Scopes and Performanceh]hAffinity Scopes and Performance}(hCj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj hhhhhMubh)}(hX%It'd be ideal if an unbound workqueue's behavior is optimal for vast majority of use cases without further tuning. Unfortunately, in the current kernel, there exists a pronounced trade-off between locality and utilization necessitating explicit configurations when workqueues are heavily used.h]hX)It’d be ideal if an unbound workqueue’s behavior is optimal for vast majority of use cases without further tuning. Unfortunately, in the current kernel, there exists a pronounced trade-off between locality and utilization necessitating explicit configurations when workqueues are heavily used.}(hj+ hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj hhubh)}(hXcHigher locality leads to higher efficiency where more work is performed for the same number of consumed CPU cycles. However, higher locality may also cause lower overall system utilization if the work items are not spread enough across the affinity scopes by the issuers. The following performance testing with dm-crypt clearly illustrates this trade-off.h]hXcHigher locality leads to higher efficiency where more work is performed for the same number of consumed CPU cycles. However, higher locality may also cause lower overall system utilization if the work items are not spread enough across the affinity scopes by the issuers. 
The following performance testing with dm-crypt clearly illustrates this trade-off.}(hj9 hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj hhubh)}(hXThe tests are run on a CPU with 12-cores/24-threads split across four L3 caches (AMD Ryzen 9 3900x). CPU clock boost is turned off for consistency. ``/dev/dm-0`` is a dm-crypt device created on NVME SSD (Samsung 990 PRO) and opened with ``cryptsetup`` with default settings.h](hThe tests are run on a CPU with 12-cores/24-threads split across four L3 caches (AMD Ryzen 9 3900x). CPU clock boost is turned off for consistency. }(hjG hhhNhNubj)}(h ``/dev/dm-0``h]h /dev/dm-0}(hjO hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjG ubhL is a dm-crypt device created on NVME SSD (Samsung 990 PRO) and opened with }(hjG hhhNhNubj)}(h``cryptsetup``h]h cryptsetup}(hja hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjG ubh with default settings.}(hjG hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhj hhubh)}(hhh](h)}(h=Scenario 1: Enough issuers and work spread across the machineh]h=Scenario 1: Enough issuers and work spread across the machine}(hj| hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjy hhhhhMubh)}(hThe command used: ::h]hThe command used:}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjy hhubj)}(h$ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k --ioengine=libaio \ --iodepth=64 --runtime=60 --numjobs=24 --time_based --group_reporting \ --name=iops-test-job --verify=sha512h]h$ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k --ioengine=libaio \ --iodepth=64 --runtime=60 --numjobs=24 --time_based --group_reporting \ --name=iops-test-job --verify=sha512}hj sbah}(h]h ]h"]h$]h&]jjuh1jhhhMhjy hhubh)}(hXThere are 24 issuers, each issuing 64 IOs concurrently. ``--verify=sha512`` makes ``fio`` generate and read back the content each time which makes execution locality matter between the issuer and ``kcryptd``. The following are the read bandwidths and CPU utilizations depending on different affinity scope settings on ``kcryptd`` measured over five runs. 
Bandwidths are in MiBps, and CPU util in percents.h](h8There are 24 issuers, each issuing 64 IOs concurrently. }(hj hhhNhNubj)}(h``--verify=sha512``h]h--verify=sha512}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubh makes }(hj hhhNhNubj)}(h``fio``h]hfio}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubhk generate and read back the content each time which makes execution locality matter between the issuer and }(hj hhhNhNubj)}(h ``kcryptd``h]hkcryptd}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubho. The following are the read bandwidths and CPU utilizations depending on different affinity scope settings on }(hj hhhNhNubj)}(h ``kcryptd``h]hkcryptd}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubhL measured over five runs. Bandwidths are in MiBps, and CPU util in percents.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhjy hhubhtable)}(hhh]htgroup)}(hhh](hcolspec)}(hhh]h}(h]h ]h"]h$]h&]colwidthKuh1j hj ubj )}(hhh]h}(h]h ]h"]h$]h&]j Kuh1j hj ubj )}(hhh]h}(h]h ]h"]h$]h&]j Kuh1j hj ubhthead)}(hhh]hrow)}(hhh](hentry)}(hhh]h)}(hAffinityh]hAffinity}(hj3 hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj0 ubah}(h]h ]h"]h$]h&]uh1j. hj+ ubj/ )}(hhh]h)}(hBandwidth (MiBps)h]hBandwidth (MiBps)}(hjJ hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjG ubah}(h]h ]h"]h$]h&]uh1j. hj+ ubj/ )}(hhh]h)}(h CPU util (%)h]h CPU util (%)}(hja hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj^ ubah}(h]h ]h"]h$]h&]uh1j. hj+ ubeh}(h]h ]h"]h$]h&]uh1j) hj& ubah}(h]h ]h"]h$]h&]uh1j$ hj ubhtbody)}(hhh](j* )}(hhh](j/ )}(hhh]h)}(hsystemh]hsystem}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj ubah}(h]h ]h"]h$]h&]uh1j. hj ubj/ )}(hhh]h)}(h1159.40 ±1.34h]h1159.40 ±1.34}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj ubah}(h]h ]h"]h$]h&]uh1j. hj ubj/ )}(hhh]h)}(h 99.31 ±0.02h]h 99.31 ±0.02}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj ubah}(h]h ]h"]h$]h&]uh1j. hj ubeh}(h]h ]h"]h$]h&]uh1j) hj ubj* )}(hhh](j/ )}(hhh]h)}(hcacheh]hcache}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj ubah}(h]h ]h"]h$]h&]uh1j. 
hj ubj/ )}(hhh]h)}(h1166.40 ±0.89h]h1166.40 ±0.89}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj ubah}(h]h ]h"]h$]h&]uh1j. hj ubj/ )}(hhh]h)}(h 99.34 ±0.01h]h 99.34 ±0.01}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjubah}(h]h ]h"]h$]h&]uh1j. hj ubeh}(h]h ]h"]h$]h&]uh1j) hj ubj* )}(hhh](j/ )}(hhh]h)}(hcache (strict)h]hcache (strict)}(hj(hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj%ubah}(h]h ]h"]h$]h&]uh1j. hj"ubj/ )}(hhh]h)}(h1166.00 ±0.71h]h1166.00 ±0.71}(hj?hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj<ubah}(h]h ]h"]h$]h&]uh1j. hj"ubj/ )}(hhh]h)}(h 99.35 ±0.01h]h 99.35 ±0.01}(hjVhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjSubah}(h]h ]h"]h$]h&]uh1j. hj"ubeh}(h]h ]h"]h$]h&]uh1j) hj ubeh}(h]h ]h"]h$]h&]uh1j hj ubeh}(h]h ]h"]h$]h&]colsKuh1j hj ubah}(h]h ]colwidths-givenah"]h$]h&]uh1j hjy hhhNhNubh)}(hWith enough issuers spread across the system, there is no downside to "cache", strict or otherwise. All three configurations saturate the whole machine but the cache-affine ones outperform by 0.6% thanks to improved locality.h]hWith enough issuers spread across the system, there is no downside to “cache”, strict or otherwise. All three configurations saturate the whole machine but the cache-affine ones outperform by 0.6% thanks to improved locality.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjy hhubeh}(h]hjubah}(h]h ]h"]h$]h&]uh1j. hjubj/ )}(hhh]h)}(h 973.40 ±1.52h]h 973.40 ±1.52}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM?hjubah}(h]h ]h"]h$]h&]uh1j. hjubj/ )}(hhh]h)}(h 74.90 ±0.07h]h 74.90 ±0.07}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM@hjubah}(h]h ]h"]h$]h&]uh1j. hjubeh}(h]h ]h"]h$]h&]uh1j) hjHubj* )}(hhh](j/ )}(hhh]h)}(hcache (strict)h]hcache (strict)}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMBhjubah}(h]h ]h"]h$]h&]uh1j. hjubj/ )}(hhh]h)}(h 828.20 ±4.49h]h 828.20 ±4.49}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMChjubah}(h]h ]h"]h$]h&]uh1j. hjubj/ )}(hhh]h)}(h 66.84 ±0.29h]h 66.84 ±0.29}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMDhjubah}(h]h ]h"]h$]h&]uh1j. 
hjubeh}(h]h ]h"]h$]h&]uh1j) hjHubeh}(h]h ]h"]h$]h&]uh1j hjubeh}(h]h ]h"]h$]h&]colsKuh1j hjubah}(h]h ]jah"]h$]h&]uh1j hjhhhNhNubh)}(hNow, the tradeoff between locality and utilization is clearer. "cache" shows a 2% bandwidth loss compared to "system", and "cache (strict)" a whopping 20%.h]hNow, the tradeoff between locality and utilization is clearer. “cache” shows a 2% bandwidth loss compared to “system”, and “cache (strict)” a whopping 20%.}(hjHhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMFhjhhubeh}(h]9scenario-3-even-fewer-issuers-not-enough-work-to-saturateah ]h"];scenario 3: even fewer issuers, not enough work to saturateah$]h&]uh1hhj hhhhhM&ubh)}(hhh](h)}(hConclusion and Recommendationsh]hConclusion and Recommendations}(hjahhhNhNubah}(h]h ]h"]h$]h&]uh1hhj^hhhhhMKubh)}(hXIn the above experiments, the efficiency advantage of the "cache" affinity scope over "system" is, while consistent and noticeable, small. However, the impact is dependent on the distances between the scopes and may be more pronounced in processors with more complex topologies.h]hXIn the above experiments, the efficiency advantage of the “cache” affinity scope over “system” is, while consistent and noticeable, small. However, the impact is dependent on the distances between the scopes and may be more pronounced in processors with more complex topologies.}(hjohhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMMhj^hhubh)}(hWhile the loss of work-conservation in certain scenarios hurts, it is a lot better than "cache (strict)" and maximizing workqueue utilization is unlikely to be the common case anyway. As such, "cache" is the default affinity scope for unbound pools.h]hXWhile the loss of work-conservation in certain scenarios hurts, it is a lot better than “cache (strict)” and maximizing workqueue utilization is unlikely to be the common case anyway. 
As such, “cache” is the default affinity scope for unbound pools.}(hj}hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMRhj^hhubj )}(hhh](j)}(hAs there is no one option which is great for most cases, workqueue usages that may consume a significant amount of CPU are recommended to configure the workqueues using ``apply_workqueue_attrs()`` and/or enable ``WQ_SYSFS``. h]h)}(hAs there is no one option which is great for most cases, workqueue usages that may consume a significant amount of CPU are recommended to configure the workqueues using ``apply_workqueue_attrs()`` and/or enable ``WQ_SYSFS``.h](hAs there is no one option which is great for most cases, workqueue usages that may consume a significant amount of CPU are recommended to configure the workqueues using }(hjhhhNhNubj)}(h``apply_workqueue_attrs()``h]happly_workqueue_attrs()}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh and/or enable }(hjhhhNhNubj)}(h ``WQ_SYSFS``h]hWQ_SYSFS}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMWhjubah}(h]h ]h"]h$]h&]uh1jhjhhhhhNubj)}(hAn unbound workqueue with strict "cpu" affinity scope behaves the same as a ``WQ_CPU_INTENSIVE`` per-cpu workqueue. There is no real advantage to the latter and an unbound workqueue provides a lot more flexibility. h]h)}(hAn unbound workqueue with strict "cpu" affinity scope behaves the same as a ``WQ_CPU_INTENSIVE`` per-cpu workqueue. There is no real advantage to the latter and an unbound workqueue provides a lot more flexibility.h](hNAn unbound workqueue with strict “cpu” affinity scope behaves the same as a }(hjhhhNhNubj)}(h``WQ_CPU_INTENSIVE``h]hWQ_CPU_INTENSIVE}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhu per-cpu workqueue. There is no real advantage to the latter and an unbound workqueue provides a lot more flexibility.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhM\hjubah}(h]h ]h"]h$]h&]uh1jhjhhhhhNubj)}(hrAffinity scopes are introduced in Linux v6.5. To emulate the previous behavior, use strict "numa" affinity scope. 
h]h)}(hqAffinity scopes are introduced in Linux v6.5. To emulate the previous behavior, use strict "numa" affinity scope.h]huAffinity scopes are introduced in Linux v6.5. To emulate the previous behavior, use strict “numa” affinity scope.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM`hjubah}(h]h ]h"]h$]h&]uh1jhjhhhhhNubj)}(hXRThe loss of work-conservation in non-strict affinity scopes is likely originating from the scheduler. There is no theoretical reason why the kernel wouldn't be able to do the right thing and maintain work-conservation in most cases. As such, it is possible that future scheduler improvements may make most of these tunables unnecessary. h]h)}(hXPThe loss of work-conservation in non-strict affinity scopes is likely originating from the scheduler. There is no theoretical reason why the kernel wouldn't be able to do the right thing and maintain work-conservation in most cases. As such, it is possible that future scheduler improvements may make most of these tunables unnecessary.h]hXRThe loss of work-conservation in non-strict affinity scopes is likely originating from the scheduler. There is no theoretical reason why the kernel wouldn’t be able to do the right thing and maintain work-conservation in most cases. 
As such, it is possible that future scheduler improvements may make most of these tunables unnecessary.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMchj ubah}(h]h ]h"]h$]h&]uh1jhjhhhhhNubeh}(h]h ]h"]h$]h&]j`jauh1j hhhMWhj^hhubeh}(h]conclusion-and-recommendationsah ]h"]conclusion and recommendationsah$]h&]uh1hhj hhhhhMKubeh}(h]affinity-scopes-and-performanceah ]h"]affinity scopes and performanceah$]h&]uh1hhhhhhhhMubh)}(hhh](h)}(hExamining Configurationh]hExamining Configuration}(hj=hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj:hhhhhMkubh)}(hUse tools/workqueue/wq_dump.py to examine unbound CPU affinity configuration, worker pools and how workqueues map to the pools: ::h]hUse tools/workqueue/wq_dump.py to examine unbound CPU affinity configuration, worker pools and how workqueues map to the pools:}(hjKhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMmhj:hhubj)}(hXb$ tools/workqueue/wq_dump.py Affinity Scopes =============== wq_unbound_cpumask=0000000f CPU nr_pods 4 pod_cpus [0]=00000001 [1]=00000002 [2]=00000004 [3]=00000008 pod_node [0]=0 [1]=0 [2]=1 [3]=1 cpu_pod [0]=0 [1]=1 [2]=2 [3]=3 SMT nr_pods 4 pod_cpus [0]=00000001 [1]=00000002 [2]=00000004 [3]=00000008 pod_node [0]=0 [1]=0 [2]=1 [3]=1 cpu_pod [0]=0 [1]=1 [2]=2 [3]=3 CACHE (default) nr_pods 2 pod_cpus [0]=00000003 [1]=0000000c pod_node [0]=0 [1]=1 cpu_pod [0]=0 [1]=0 [2]=1 [3]=1 NUMA nr_pods 2 pod_cpus [0]=00000003 [1]=0000000c pod_node [0]=0 [1]=1 cpu_pod [0]=0 [1]=0 [2]=1 [3]=1 SYSTEM nr_pods 1 pod_cpus [0]=0000000f pod_node [0]=-1 cpu_pod [0]=0 [1]=0 [2]=0 [3]=0 Worker Pools ============ pool[00] ref= 1 nice= 0 idle/workers= 4/ 4 cpu= 0 pool[01] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 0 pool[02] ref= 1 nice= 0 idle/workers= 4/ 4 cpu= 1 pool[03] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 1 pool[04] ref= 1 nice= 0 idle/workers= 4/ 4 cpu= 2 pool[05] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 2 pool[06] ref= 1 nice= 0 idle/workers= 3/ 3 cpu= 3 pool[07] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 3 pool[08] ref=42 nice= 0 idle/workers= 6/ 6 cpus=0000000f 
pool[09] ref=28 nice= 0 idle/workers= 3/ 3 cpus=00000003 pool[10] ref=28 nice= 0 idle/workers= 17/ 17 cpus=0000000c pool[11] ref= 1 nice=-20 idle/workers= 1/ 1 cpus=0000000f pool[12] ref= 2 nice=-20 idle/workers= 1/ 1 cpus=00000003 pool[13] ref= 2 nice=-20 idle/workers= 1/ 1 cpus=0000000c Workqueue CPU -> pool ===================== [ workqueue \ CPU 0 1 2 3 dfl] events percpu 0 2 4 6 events_highpri percpu 1 3 5 7 events_long percpu 0 2 4 6 events_unbound unbound 9 9 10 10 8 events_freezable percpu 0 2 4 6 events_power_efficient percpu 0 2 4 6 events_freezable_pwr_ef percpu 0 2 4 6 rcu_gp percpu 0 2 4 6 rcu_par_gp percpu 0 2 4 6 slub_flushwq percpu 0 2 4 6 netns ordered 8 8 8 8 8 ...h]hXb$ tools/workqueue/wq_dump.py Affinity Scopes =============== wq_unbound_cpumask=0000000f CPU nr_pods 4 pod_cpus [0]=00000001 [1]=00000002 [2]=00000004 [3]=00000008 pod_node [0]=0 [1]=0 [2]=1 [3]=1 cpu_pod [0]=0 [1]=1 [2]=2 [3]=3 SMT nr_pods 4 pod_cpus [0]=00000001 [1]=00000002 [2]=00000004 [3]=00000008 pod_node [0]=0 [1]=0 [2]=1 [3]=1 cpu_pod [0]=0 [1]=1 [2]=2 [3]=3 CACHE (default) nr_pods 2 pod_cpus [0]=00000003 [1]=0000000c pod_node [0]=0 [1]=1 cpu_pod [0]=0 [1]=0 [2]=1 [3]=1 NUMA nr_pods 2 pod_cpus [0]=00000003 [1]=0000000c pod_node [0]=0 [1]=1 cpu_pod [0]=0 [1]=0 [2]=1 [3]=1 SYSTEM nr_pods 1 pod_cpus [0]=0000000f pod_node [0]=-1 cpu_pod [0]=0 [1]=0 [2]=0 [3]=0 Worker Pools ============ pool[00] ref= 1 nice= 0 idle/workers= 4/ 4 cpu= 0 pool[01] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 0 pool[02] ref= 1 nice= 0 idle/workers= 4/ 4 cpu= 1 pool[03] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 1 pool[04] ref= 1 nice= 0 idle/workers= 4/ 4 cpu= 2 pool[05] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 2 pool[06] ref= 1 nice= 0 idle/workers= 3/ 3 cpu= 3 pool[07] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 3 pool[08] ref=42 nice= 0 idle/workers= 6/ 6 cpus=0000000f pool[09] ref=28 nice= 0 idle/workers= 3/ 3 cpus=00000003 pool[10] ref=28 nice= 0 idle/workers= 17/ 17 cpus=0000000c pool[11] ref= 1 nice=-20 
idle/workers= 1/ 1 cpus=0000000f pool[12] ref= 2 nice=-20 idle/workers= 1/ 1 cpus=00000003 pool[13] ref= 2 nice=-20 idle/workers= 1/ 1 cpus=0000000c Workqueue CPU -> pool ===================== [ workqueue \ CPU 0 1 2 3 dfl] events percpu 0 2 4 6 events_highpri percpu 1 3 5 7 events_long percpu 0 2 4 6 events_unbound unbound 9 9 10 10 8 events_freezable percpu 0 2 4 6 events_power_efficient percpu 0 2 4 6 events_freezable_pwr_ef percpu 0 2 4 6 rcu_gp percpu 0 2 4 6 rcu_par_gp percpu 0 2 4 6 slub_flushwq percpu 0 2 4 6 netns ordered 8 8 8 8 8 ...}hjYsbah}(h]h ]h"]h$]h&]jjuh1jhhhMphj:hhubh)}(h-See the command's help message for more info.h]h/See the command’s help message for more info.}(hjghhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj:hhubeh}(h]examining-configurationah ]h"]examining configurationah$]h&]uh1hhhhhhhhMkubh)}(hhh](h)}(h Monitoringh]h Monitoring}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj}hhhhhMubh)}(hEUse tools/workqueue/wq_monitor.py to monitor workqueue operations: ::h]hBUse tools/workqueue/wq_monitor.py to monitor workqueue operations:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj}hhubj)}(hX$ tools/workqueue/wq_monitor.py events total infl CPUtime CPUhog CMW/RPR mayday rescued events 18545 0 6.1 0 5 - - events_highpri 8 0 0.0 0 0 - - events_long 3 0 0.0 0 0 - - events_unbound 38306 0 0.1 - 7 - - events_freezable 0 0 0.0 0 0 - - events_power_efficient 29598 0 0.2 0 0 - - events_freezable_pwr_ef 10 0 0.0 0 0 - - sock_diag_events 0 0 0.0 0 0 - - total infl CPUtime CPUhog CMW/RPR mayday rescued events 18548 0 6.1 0 5 - - events_highpri 8 0 0.0 0 0 - - events_long 3 0 0.0 0 0 - - events_unbound 38322 0 0.1 - 7 - - events_freezable 0 0 0.0 0 0 - - events_power_efficient 29603 0 0.2 0 0 - - events_freezable_pwr_ef 10 0 0.0 0 0 - - sock_diag_events 0 0 0.0 0 0 - - ...h]hX$ tools/workqueue/wq_monitor.py events total infl CPUtime CPUhog CMW/RPR mayday rescued events 18545 0 6.1 0 5 - - events_highpri 8 0 0.0 0 0 - - events_long 3 0 0.0 0 0 - - events_unbound 38306 0 0.1 - 7 - - 
events_freezable 0 0 0.0 0 0 - - events_power_efficient 29598 0 0.2 0 0 - - events_freezable_pwr_ef 10 0 0.0 0 0 - - sock_diag_events 0 0 0.0 0 0 - - total infl CPUtime CPUhog CMW/RPR mayday rescued events 18548 0 6.1 0 5 - - events_highpri 8 0 0.0 0 0 - - events_long 3 0 0.0 0 0 - - events_unbound 38322 0 0.1 - 7 - - events_freezable 0 0 0.0 0 0 - - events_power_efficient 29603 0 0.2 0 0 - - events_freezable_pwr_ef 10 0 0.0 0 0 - - sock_diag_events 0 0 0.0 0 0 - - ...}hjsbah}(h]h ]h"]h$]h&]jjuh1jhhhMhj}hhubh)}(h-See the command's help message for more info.h]h/See the command’s help message for more info.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj}hhubeh}(h] monitoringah ]h"] monitoringah$]h&]uh1hhhhhhhhMubh)}(hhh](h)}(h Debuggingh]h Debugging}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhMubh)}(hBecause the work functions are executed by generic worker threads there are a few tricks needed to shed some light on misbehaving workqueue users.h]hBecause the work functions are executed by generic worker threads there are a few tricks needed to shed some light on misbehaving workqueue users.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubh)}(h1Worker threads show up in the process list as: ::h]h.Worker threads show up in the process list as:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubj)}(hX;root 5671 0.0 0.0 0 0 ? S 12:07 0:00 [kworker/0:1] root 5672 0.0 0.0 0 0 ? S 12:07 0:00 [kworker/1:2] root 5673 0.0 0.0 0 0 ? S 12:12 0:00 [kworker/0:0] root 5674 0.0 0.0 0 0 ? S 12:13 0:00 [kworker/1:0]h]hX;root 5671 0.0 0.0 0 0 ? S 12:07 0:00 [kworker/0:1] root 5672 0.0 0.0 0 0 ? S 12:07 0:00 [kworker/1:2] root 5673 0.0 0.0 0 0 ? S 12:12 0:00 [kworker/0:0] root 5674 0.0 0.0 0 0 ? S 12:13 0:00 [kworker/1:0]}hjsbah}(h]h ]h"]h$]h&]jjuh1jhhhMhjhhubh)}(h[If kworkers are going crazy (using too much cpu), there are two types of possible problems:h]h[If kworkers are going crazy (using too much cpu), there are two types of possible problems:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubj)}(hh1. 
Something being scheduled in rapid succession 2. A single work item that consumes lots of cpu cycles h]henumerated_list)}(hhh](j)}(h-Something being scheduled in rapid successionh]h)}(hjh]h-Something being scheduled in rapid succession}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(h4A single work item that consumes lots of cpu cycles h]h)}(h3A single work item that consumes lots of cpu cyclesh]h3A single work item that consumes lots of cpu cycles}(hj-hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj)ubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]enumtypearabicprefixhsuffix.uh1j hj ubah}(h]h ]h"]h$]h&]uh1jhhhMhjhhubh)}(h.The first one can be tracked using tracing: ::h]h+The first one can be tracked using tracing:}(hjRhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubj)}(h$ echo workqueue:workqueue_queue_work > /sys/kernel/tracing/set_event $ cat /sys/kernel/tracing/trace_pipe > out.txt (wait a few secs) ^Ch]h$ echo workqueue:workqueue_queue_work > /sys/kernel/tracing/set_event $ cat /sys/kernel/tracing/trace_pipe > out.txt (wait a few secs) ^C}hj`sbah}(h]h ]h"]h$]h&]jjuh1jhhhMhjhhubh)}(hIf something is busy looping on work queueing, it would be dominating the output and the offender can be determined with the work item function.h]hIf something is busy looping on work queueing, it would be dominating the output and the offender can be determined with the work item function.}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubh)}(hvFor the second type of problems it should be possible to just check the stack trace of the offending worker thread. 
::h]hsFor the second type of problems it should be possible to just check the stack trace of the offending worker thread.}(hj|hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubj)}(h'$ cat /proc/THE_OFFENDING_KWORKER/stackh]h'$ cat /proc/THE_OFFENDING_KWORKER/stack}hjsbah}(h]h ]h"]h$]h&]jjuh1jhhhMhjhhubh)}(hHThe work item's function should be trivially visible in the stack trace.h]hJThe work item’s function should be trivially visible in the stack trace.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubeh}(h] debuggingah ]h"] debuggingah$]h&]uh1hhhhhhhhMubh)}(hhh](h)}(hNon-reentrance Conditionsh]hNon-reentrance Conditions}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhMubh)}(hzWorkqueue guarantees that a work item cannot be re-entrant if the following conditions hold after a work item gets queued:h]hzWorkqueue guarantees that a work item cannot be re-entrant if the following conditions hold after a work item gets queued:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubj)}(h1. The work function hasn't been changed. 2. No one queues the work item to another workqueue. 3. The work item hasn't been reinitiated. h]j)}(hhh](j)}(h&The work function hasn't been changed.h]h)}(hjh]h(The work function hasn’t been changed.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(h1No one queues the work item to another workqueue.h]h)}(hjh]h1No one queues the work item to another workqueue.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(h'The work item hasn't been reinitiated. 
h]h)}(h&The work item hasn't been reinitiated.h]h(The work item hasn’t been reinitiated.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]jGjHjIhjJjKuh1j hjubah}(h]h ]h"]h$]h&]uh1jhhhMhjhhubh)}(hIn other words, if the above conditions hold, the work item is guaranteed to be executed by at most one worker system-wide at any given time.h]hIn other words, if the above conditions hold, the work item is guaranteed to be executed by at most one worker system-wide at any given time.}(hj&hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubh)}(hNote that requeuing the work item (to the same queue) from within its own work function doesn't break these conditions, so it's safe to do. Otherwise, caution is required when breaking the conditions inside a work function.h]hNote that requeuing the work item (to the same queue) from within its own work function doesn’t break these conditions, so it’s safe to do. Otherwise, caution is required when breaking the conditions inside a work function.}(hj4hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM hjhhubeh}(h]non-reentrance-conditionsah ]h"]non-reentrance conditionsah$]h&]uh1hhhhhhhhMubh)}(hhh](h)}(h&Kernel Inline Documentations Referenceh]h&Kernel Inline Documentations Reference}(hjMhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjJhhhhhMubhindex)}(hhh]h}(h]h ]h"]h$]h&]entries](singleworkqueue_attrs (C struct)c.workqueue_attrshNtauh1j[hjJhhhNhNubhdesc)}(hhh](hdesc_signature)}(hworkqueue_attrsh]hdesc_signature_line)}(hstruct workqueue_attrsh](hdesc_sig_keyword)}(hstructh]hstruct}(hjhhhNhNubah}(h]h ]kah"]h$]h&]uh1j}hjyhhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhKubhdesc_sig_space)}(h h]h }(hjhhhNhNubah}(h]h ]wah"]h$]h&]uh1jhjyhhhjhKubh desc_name)}(hworkqueue_attrsh]h desc_sig_name)}(hjuh]hworkqueue_attrs}(hjhhhNhNubah}(h]h ]nah"]h$]h&]uh1jhjubah}(h]h ](sig-namedescnameeh"]h$]h&]jjuh1jhjyhhhjhKubeh}(h]h ]h"]h$]h&]jj add_permalinkuh1jwsphinx_line_type declaratorhjshhhjhKubah}(h]jjah ](sig sig-objecteh"]h$]h&]
is_multiline _toc_parts) _toc_namehuh1jqhjhKhjnhhubh desc_content)}(hhh]h)}(h"A struct for workqueue attributes.h]h"A struct for workqueue attributes.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhKhjhhubah}(h]h ]h"]h$]h&]uh1jhjnhhhjhKubeh}(h]h ](cstructeh"]h$]h&]domainjobjtypejdesctypejnoindex noindexentrynocontentsentryuh1jlhhhjJhNhNubh container)}(hX**Definition**:: struct workqueue_attrs { int nice; cpumask_var_t cpumask; cpumask_var_t __pod_cpumask; bool affn_strict; enum wq_affn_scope affn_scope; bool ordered; }; **Members** ``nice`` nice level ``cpumask`` allowed CPUs Work items in this workqueue are affine to these CPUs and not allowed to execute on other CPUs. A pool serving a workqueue must have the same **cpumask**. ``__pod_cpumask`` internal attribute used to create per-pod pools Internal use only. Per-pod unbound worker pools are used to improve locality. Always a subset of ->cpumask. A workqueue can be associated with multiple worker pools with disjoint **__pod_cpumask**'s. Whether the enforcement of a pool's **__pod_cpumask** is strict depends on **affn_strict**. ``affn_strict`` affinity scope is strict If clear, workqueue will make a best-effort attempt at starting the worker inside **__pod_cpumask** but the scheduler is free to migrate it outside. If set, workers are only allowed to run inside **__pod_cpumask**. ``affn_scope`` unbound CPU affinity scope CPU pods are used to improve execution locality of unbound work items. There are multiple pod types, one for each wq_affn_scope, and every CPU in the system belongs to one pod in every pod type. CPUs that belong to the same pod share the worker pool. For example, selecting ``WQ_AFFN_NUMA`` makes the workqueue use a separate worker pool for each NUMA node. 
``ordered`` work items must be executed one by one in queueing orderh](h)}(h**Definition**::h](j)}(h**Definition**h]h Definition}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh:}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhKhjubj)}(hstruct workqueue_attrs { int nice; cpumask_var_t cpumask; cpumask_var_t __pod_cpumask; bool affn_strict; enum wq_affn_scope affn_scope; bool ordered; };h]hstruct workqueue_attrs { int nice; cpumask_var_t cpumask; cpumask_var_t __pod_cpumask; bool affn_strict; enum wq_affn_scope affn_scope; bool ordered; };}hjsbah}(h]h ]h"]h$]h&]jjuh1jh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhKhjubh)}(h **Members**h]j)}(hj.h]hMembers}(hj0hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj,ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhKhjubjS)}(hhh](jX)}(h``nice`` nice level h](j^)}(h``nice``h]j)}(hjMh]hnice}(hjOhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjKubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhKhjGubjw)}(hhh]h)}(h nice levelh]h nice level}(hjfhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjbhKhjcubah}(h]h ]h"]h$]h&]uh1jvhjGubeh}(h]h ]h"]h$]h&]uh1jWhjbhKhjDubjX)}(h``cpumask`` allowed CPUs Work items in this workqueue are affine to these CPUs and not allowed to execute on other CPUs. A pool serving a workqueue must have the same **cpumask**. h](j^)}(h ``cpumask``h]j)}(hjh]hcpumask}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhKhjubjw)}(hhh](h)}(h allowed CPUsh]h allowed CPUs}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhKhjubh)}(hWork items in this workqueue are affine to these CPUs and not allowed to execute on other CPUs. 
A pool serving a workqueue must have the same **cpumask**.h](hWork items in this workqueue are affine to these CPUs and not allowed to execute on other CPUs. A pool serving a workqueue must have the same }(hjhhhNhNubj)}(h **cpumask**h]hcpumask}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhKhjubeh}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhKhjDubjX)}(hXh``__pod_cpumask`` internal attribute used to create per-pod pools Internal use only. Per-pod unbound worker pools are used to improve locality. Always a subset of ->cpumask. A workqueue can be associated with multiple worker pools with disjoint **__pod_cpumask**'s. Whether the enforcement of a pool's **__pod_cpumask** is strict depends on **affn_strict**. h](j^)}(h``__pod_cpumask``h]j)}(hjh]h __pod_cpumask}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhKhjubjw)}(hhh](h)}(h/internal attribute used to create per-pod poolsh]h/internal attribute used to create per-pod pools}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhKhjubh)}(hInternal use only.h]hInternal use only.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhKhjubh)}(hXPer-pod unbound worker pools are used to improve locality. Always a subset of ->cpumask. A workqueue can be associated with multiple worker pools with disjoint **__pod_cpumask**'s. Whether the enforcement of a pool's **__pod_cpumask** is strict depends on **affn_strict**.h](hPer-pod unbound worker pools are used to improve locality. Always a subset of ->cpumask. 
A workqueue can be associated with multiple worker pools with disjoint }(hjhhhNhNubj)}(h**__pod_cpumask**h]h __pod_cpumask}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh,’s. Whether the enforcement of a pool’s }(hjhhhNhNubj)}(h**__pod_cpumask**h]h __pod_cpumask}(hj2hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh is strict depends on }(hjhhhNhNubj)}(h**affn_strict**h]h affn_strict}(hjDhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhKhjubeh}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhKhjDubjX)}(hX``affn_strict`` affinity scope is strict If clear, workqueue will make a best-effort attempt at starting the worker inside **__pod_cpumask** but the scheduler is free to migrate it outside. If set, workers are only allowed to run inside **__pod_cpumask**. h](j^)}(h``affn_strict``h]j)}(hjoh]h affn_strict}(hjqhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjmubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhKhjiubjw)}(hhh](h)}(haffinity scope is stricth]haffinity scope is strict}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhKhjubh)}(hIf clear, workqueue will make a best-effort attempt at starting the worker inside **__pod_cpumask** but the scheduler is free to migrate it outside.h](hRIf clear, workqueue will make a best-effort attempt at starting the worker inside }(hjhhhNhNubj)}(h**__pod_cpumask**h]h __pod_cpumask}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh1 but the scheduler is free to migrate it outside.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhKhjubh)}(hAIf set, workers are only allowed to run inside **__pod_cpumask**.h](h/If set, workers are only allowed to run inside }(hjhhhNhNubj)}(h**__pod_cpumask**h]h __pod_cpumask}(hjhhhNhNubah}(h]h 
]h"]h$]h&]uh1jhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhjhKhjubeh}(h]h ]h"]h$]h&]uh1jvhjiubeh}(h]h ]h"]h$]h&]uh1jWhjhKhjDubjX)}(hX``affn_scope`` unbound CPU affinity scope CPU pods are used to improve execution locality of unbound work items. There are multiple pod types, one for each wq_affn_scope, and every CPU in the system belongs to one pod in every pod type. CPUs that belong to the same pod share the worker pool. For example, selecting ``WQ_AFFN_NUMA`` makes the workqueue use a separate worker pool for each NUMA node. h](j^)}(h``affn_scope``h]j)}(hjh]h affn_scope}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhKhjubjw)}(hhh](h)}(hunbound CPU affinity scopeh]hunbound CPU affinity scope}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhKhjubh)}(hXeCPU pods are used to improve execution locality of unbound work items. There are multiple pod types, one for each wq_affn_scope, and every CPU in the system belongs to one pod in every pod type. CPUs that belong to the same pod share the worker pool. For example, selecting ``WQ_AFFN_NUMA`` makes the workqueue use a separate worker pool for each NUMA node.h](hXCPU pods are used to improve execution locality of unbound work items. There are multiple pod types, one for each wq_affn_scope, and every CPU in the system belongs to one pod in every pod type. CPUs that belong to the same pod share the worker pool. 
For example, selecting }(hjhhhNhNubj)}(h``WQ_AFFN_NUMA``h]h WQ_AFFN_NUMA}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhC makes the workqueue use a separate worker pool for each NUMA node.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhKhjubeh}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhKhjDubjX)}(hD``ordered`` work items must be executed one by one in queueing orderh](j^)}(h ``ordered``h]j)}(hjEh]hordered}(hjGhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjCubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhKhj?ubjw)}(hhh]h)}(h8work items must be executed one by one in queueing orderh]h8work items must be executed one by one in queueing order}(hj^hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhKhj[ubah}(h]h ]h"]h$]h&]uh1jvhj?ubeh}(h]h ]h"]h$]h&]uh1jWhjZhKhjDubeh}(h]h ]h"]h$]h&]uh1jRhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhKhjJhhubh)}(h>This can be used to change attributes of an unbound workqueue.h]h>This can be used to change attributes of an unbound workqueue.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhKhjJhhubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhwork_pending (C macro)c.work_pendinghNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h work_pendingh]jx)}(h work_pendingh]j)}(h work_pendingh]j)}(hjh]h work_pending}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhM`ubah}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhM`ubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhM`hjhhubj)}(hhh]h}(h]h 
]h"]h$]h&]uh1jhjhhhjhM`ubeh}(h]h ](jmacroeh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubh)}(h``work_pending (work)``h]j)}(hjh]hwork_pending (work)}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhMbhjJhhubj)}(h2Find out whether a work item is currently pending h]h)}(h1Find out whether a work item is currently pendingh]h1Find out whether a work item is currently pending}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhM`hjubah}(h]h ]h"]h$]h&]uh1jhj$hM`hjJhhubj)}(h4**Parameters** ``work`` The work item in questionh](h)}(h**Parameters**h]j)}(hj1h]h Parameters}(hj3hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj/ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhMdhj+ubjS)}(hhh]jX)}(h"``work`` The work item in questionh](j^)}(h``work``h]j)}(hjPh]hwork}(hjRhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjNubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhMfhjJubjw)}(hhh]h)}(hThe work item in questionh]hThe work item in question}(hjihhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhMahjfubah}(h]h ]h"]h$]h&]uh1jvhjJubeh}(h]h ]h"]h$]h&]uh1jWhjehMfhjGubah}(h]h ]h"]h$]h&]uh1jRhj+ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhdelayed_work_pending (C macro)c.delayed_work_pendinghNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hdelayed_work_pendingh]jx)}(hdelayed_work_pendingh]j)}(hdelayed_work_pendingh]j)}(hjh]hdelayed_work_pending}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhMgubah}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhMgubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhMghjhhubj)}(hhh]h}(h]h 
]h"]h$]h&]uh1jhjhhhjhMgubeh}(h]h ](jmacroeh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubh)}(h``delayed_work_pending (w)``h]j)}(hjh]hdelayed_work_pending (w)}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhMihjJhhubj)}(h limits the number of in-flight work items for each CPU. e.g. }(hjshhhNhNubj)}(h**max_active**h]h max_active}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjsubhW of 1 indicates that each CPU can be executing at most one work item for the workqueue.}(hjshhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhMhjDubh)}(hFor unbound workqueues, **max_active** limits the number of in-flight work items for the whole system. e.g. **max_active** of 16 indicates that there can be at most 16 work items executing for the workqueue in the whole system.h](hFor unbound workqueues, }(hjhhhNhNubj)}(h**max_active**h]h max_active}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhF limits the number of in-flight work items for the whole system. e.g. 
}(hjhhhNhNubj)}(h**max_active**h]h max_active}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhi of 16 indicates that there can be at most 16 work items executing for the workqueue in the whole system.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhMhjDubh)}(hAs sharing the same active counter for an unbound workqueue across multiple NUMA nodes can be expensive, **max_active** is distributed to each NUMA node according to the proportion of the number of online CPUs and enforced independently.h](hiAs sharing the same active counter for an unbound workqueue across multiple NUMA nodes can be expensive, }(hjhhhNhNubj)}(h**max_active**h]h max_active}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhv is distributed to each NUMA node according to the proportion of the number of online CPUs and enforced independently.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhMhjDubh)}(hXDepending on online CPU distribution, a node may end up with per-node max_active which is significantly lower than **max_active**, which can lead to deadlocks if the per-node concurrency limit is lower than the maximum number of interdependent work items for the workqueue.h](hsDepending on online CPU distribution, a node may end up with per-node max_active which is significantly lower than }(hjhhhNhNubj)}(h**max_active**h]h max_active}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh, which can lead to deadlocks if the per-node concurrency limit is lower than the maximum number of interdependent work items for the workqueue.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhMhjDubh)}(hX0To guarantee forward progress regardless of online CPU distribution, the concurrency limit on every node is guaranteed to be equal to or greater than min_active which is set to min(**max_active**, 
``WQ_DFL_MIN_ACTIVE``). This means that the sum of per-node max_active's may be larger than **max_active**.h](hTo guarantee forward progress regardless of online CPU distribution, the concurrency limit on every node is guaranteed to be equal to or greater than min_active which is set to min(}(hjhhhNhNubj)}(h**max_active**h]h max_active}(hj#hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh, }(hjhhhNhNubj)}(h``WQ_DFL_MIN_ACTIVE``h]hWQ_DFL_MIN_ACTIVE}(hj5hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhI). This means that the sum of per-node max_active’s may be larger than }(hjhhhNhNubj)}(h**max_active**h]h max_active}(hjGhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhMhjDubh)}(haFor detailed information on ``WQ_*`` flags, please refer to Documentation/core-api/workqueue.rst.h](hFor detailed information on }(hj`hhhNhNubj)}(h``WQ_*``h]hWQ_*}(hjhhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj`ubh= flags, please refer to Documentation/core-api/workqueue.rst.}(hj`hhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhMhjDubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhMhjDubh)}(hCPointer to the allocated workqueue on success, ``NULL`` on failure.h](h/Pointer to the allocated workqueue on success, }(hjhhhNhNubj)}(h``NULL``h]hNULL}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh on failure.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhMhjDubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh(alloc_workqueue_lockdep_map (C function)c.alloc_workqueue_lockdep_maphNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hstruct workqueue_struct * alloc_workqueue_lockdep_map (const char *fmt, unsigned int flags, 
int max_active, struct lockdep_map *lockdep_map, ...)h]jx)}(hstruct workqueue_struct *alloc_workqueue_lockdep_map(const char *fmt, unsigned int flags, int max_active, struct lockdep_map *lockdep_map, ...)h](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjhhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhMubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhMubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jalloc_workqueue_lockdep_mapsbc.alloc_workqueue_lockdep_mapasbuh1hhjhhhjhMubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhMubj)}(hjah]h*}(hj(hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhMubj)}(halloc_workqueue_lockdep_maph]j)}(hjh]halloc_workqueue_lockdep_map}(hj9hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj5ubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhMubj )}(h[(const char *fmt, unsigned int flags, int max_active, struct lockdep_map *lockdep_map, ...)h](j)}(hconst char *fmth](j~)}(hjh]hconst}(hjThhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjPubj)}(h h]h }(hjahhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjPubj4)}(hcharh]hchar}(hjohhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjPubj)}(h h]h }(hj}hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjPubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjPubj)}(hfmth]hfmt}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjPubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjLubj)}(hunsigned int flagsh](j4)}(hunsignedh]hunsigned}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj4)}(hinth]hint}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hflagsh]hflags}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjLubj)}(hint max_activeh](j4)}(hinth]hint}(hj hhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjubj)}(h h]h }(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h max_activeh]h max_active}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h 
]h"]h$]h&]noemphjjuh1jhjLubj)}(hstruct lockdep_map *lockdep_maph](j~)}(hjh]hstruct}(hj7 hhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj3 ubj)}(h h]h }(hjD hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj3 ubh)}(hhh]j)}(h lockdep_maph]h lockdep_map}(hjU hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjR ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjW modnameN classnameNjj)}j]jc.alloc_workqueue_lockdep_mapasbuh1hhj3 ubj)}(h h]h }(hjs hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj3 ubj)}(hjah]h*}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj3 ubj)}(h lockdep_maph]h lockdep_map}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj3 ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjLubj)}(h...h]j)}(hjh]h...}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]noemphjjuh1jhjLubeh}(h]h ]h"]h$]h&]jjuh1j hjhhhjhMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhMubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhMhjhhubj)}(hhh]h)}(h2allocate a workqueue with user-defined lockdep_maph]h2allocate a workqueue with user-defined lockdep_map}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhMhj hhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjj jj jjjuh1jlhhhjJhNhNubj)}(hX&**Parameters** ``const char *fmt`` printf format for the name of the workqueue ``unsigned int flags`` WQ_* flags ``int max_active`` max in-flight work items, 0 for default ``struct lockdep_map *lockdep_map`` user-defined lockdep_map ``...`` args for **fmt** **Description** Same as alloc_workqueue but with a user-defined lockdep_map. Useful for workqueues created with the same purpose and to avoid leaking a lockdep_map on each workqueue creation. 
**Return** Pointer to the allocated workqueue on success, ``NULL`` on failure.h](h)}(h**Parameters**h]j)}(hj h]h Parameters}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhM hj ubjS)}(hhh](jX)}(h@``const char *fmt`` printf format for the name of the workqueue h](j^)}(h``const char *fmt``h]j)}(hj!h]hconst char *fmt}(hj!hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj!ubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhMhj !ubjw)}(hhh]h)}(h+printf format for the name of the workqueueh]h+printf format for the name of the workqueue}(hj*!hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj&!hMhj'!ubah}(h]h ]h"]h$]h&]uh1jvhj !ubeh}(h]h ]h"]h$]h&]uh1jWhj&!hMhj!ubjX)}(h"``unsigned int flags`` WQ_* flags h](j^)}(h``unsigned int flags``h]j)}(hjJ!h]hunsigned int flags}(hjL!hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjH!ubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhMhjD!ubjw)}(hhh]h)}(h WQ_* flagsh]h WQ_* flags}(hjc!hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj_!hMhj`!ubah}(h]h ]h"]h$]h&]uh1jvhjD!ubeh}(h]h ]h"]h$]h&]uh1jWhj_!hMhj!ubjX)}(h;``int max_active`` max in-flight work items, 0 for default h](j^)}(h``int max_active``h]j)}(hj!h]hint max_active}(hj!hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj!ubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhMhj}!ubjw)}(hhh]h)}(h'max in-flight work items, 0 for defaulth]h'max in-flight work items, 0 for default}(hj!hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj!hMhj!ubah}(h]h ]h"]h$]h&]uh1jvhj}!ubeh}(h]h ]h"]h$]h&]uh1jWhj!hMhj!ubjX)}(h=``struct lockdep_map *lockdep_map`` user-defined lockdep_map h](j^)}(h#``struct lockdep_map *lockdep_map``h]j)}(hj!h]hstruct lockdep_map *lockdep_map}(hj!hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj!ubah}(h]h 
]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhM hj!ubjw)}(hhh]h)}(huser-defined lockdep_maph]huser-defined lockdep_map}(hj!hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj!hM hj!ubah}(h]h ]h"]h$]h&]uh1jvhj!ubeh}(h]h ]h"]h$]h&]uh1jWhj!hM hj!ubjX)}(h``...`` args for **fmt** h](j^)}(h``...``h]j)}(hj!h]h...}(hj!hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj!ubah}(h]h ]h"]h$]h&]uh1j]h]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhM hj!ubjw)}(hhh]h)}(hargs for **fmt**h](h args for }(hj"hhhNhNubj)}(h**fmt**h]hfmt}(hj"hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj"ubeh}(h]h ]h"]h$]h&]uh1hhj "hM hj "ubah}(h]h ]h"]h$]h&]uh1jvhj!ubeh}(h]h ]h"]h$]h&]uh1jWhj "hM hj!ubeh}(h]h ]h"]h$]h&]uh1jRhj ubh)}(h**Description**h]j)}(hj>"h]h Description}(hj@"hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj<"ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhM hj ubh)}(hSame as alloc_workqueue but with a user-defined lockdep_map. Useful for workqueues created with the same purpose and to avoid leaking a lockdep_map on each workqueue creation.h]hSame as alloc_workqueue but with a user-defined lockdep_map. 
Useful for workqueues created with the same purpose and to avoid leaking a lockdep_map on each workqueue creation.}(hjT"hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhM hj ubh)}(h **Return**h]j)}(hje"h]hReturn}(hjg"hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjc"ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhMhj ubh)}(hCPointer to the allocated workqueue on success, ``NULL`` on failure.h](h/Pointer to the allocated workqueue on success, }(hj{"hhhNhNubj)}(h``NULL``h]hNULL}(hj"hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj{"ubh on failure.}(hj{"hhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhMhj ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh-alloc_ordered_workqueue_lockdep_map (C macro)%c.alloc_ordered_workqueue_lockdep_maphNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h#alloc_ordered_workqueue_lockdep_maph]jx)}(h#alloc_ordered_workqueue_lockdep_maph]j)}(h#alloc_ordered_workqueue_lockdep_maph]j)}(hj"h]h#alloc_ordered_workqueue_lockdep_map}(hj"hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj"ubah}(h]h ](jjeh"]h$]h&]jjuh1jhj"hhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhMubah}(h]h ]h"]h$]h&]jjjuh1jwjjhj"hhhj"hMubah}(h]j"ah ](jjeh"]h$]h&]jj)jhuh1jqhj"hMhj"hhubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj"hhhj"hMubeh}(h]h ](jmacroeh"]h$]h&]jjjj"jj"jjjuh1jlhhhjJhNhNubh)}(hJ``alloc_ordered_workqueue_lockdep_map (fmt, flags, lockdep_map, args...)``h]j)}(hj"h]hFalloc_ordered_workqueue_lockdep_map (fmt, flags, lockdep_map, args...)}(hj"hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj"ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:787: ./include/linux/workqueue.hhMhjJhhubj)}(h:ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: 
./kernel/workqueue.chM3hj::ubjS)}(hhh](jX)}(h``pool`` iteration cursor h](j^)}(h``pool``h]j)}(hj_:h]hpool}(hja:hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj]:ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM0hjY:ubjw)}(hhh]h)}(hiteration cursorh]hiteration cursor}(hjx:hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjt:hM0hju:ubah}(h]h ]h"]h$]h&]uh1jvhjY:ubeh}(h]h ]h"]h$]h&]uh1jWhjt:hM0hjV:ubjX)}(h"``pi`` integer used for iteration h](j^)}(h``pi``h]j)}(hj:h]hpi}(hj:hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj:ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM1hj:ubjw)}(hhh]h)}(hinteger used for iterationh]hinteger used for iteration}(hj:hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj:hM1hj:ubah}(h]h ]h"]h$]h&]uh1jvhj:ubeh}(h]h ]h"]h$]h&]uh1jWhj:hM1hjV:ubeh}(h]h ]h"]h$]h&]uh1jRhj::ubh)}(h**Description**h]j)}(hj:h]h Description}(hj:hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj:ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM3hj::ubh)}(hThis must be called either with wq_pool_mutex held or RCU read locked. If the pool needs to be used beyond the locking in effect, the caller is responsible for guaranteeing that the pool stays online.h]hThis must be called either with wq_pool_mutex held or RCU read locked. 
If the pool needs to be used beyond the locking in effect, the caller is responsible for guaranteeing that the pool stays online.}(hj:hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM2hj::ubh)}(hLThe if/else clause exists only for the lockdep assertion and can be ignored.h]hLThe if/else clause exists only for the lockdep assertion and can be ignored.}(hj:hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM6hj::ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhfor_each_pool_worker (C macro)c.for_each_pool_workerhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hfor_each_pool_workerh]jx)}(hfor_each_pool_workerh]j)}(hfor_each_pool_workerh]j)}(hj!;h]hfor_each_pool_worker}(hj+;hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj';ubah}(h]h ](jjeh"]h$]h&]jjuh1jhj#;hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM@ubah}(h]h ]h"]h$]h&]jjjuh1jwjjhj;hhhj>;hM@ubah}(h]j;ah ](jjeh"]h$]h&]jj)jhuh1jqhj>;hM@hj;hhubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj;hhhj>;hM@ubeh}(h]h ](jmacroeh"]h$]h&]jjjjW;jjW;jjjuh1jlhhhjJhNhNubh)}(h'``for_each_pool_worker (worker, pool)``h]j)}(hj];h]h#for_each_pool_worker (worker, pool)}(hj_;hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj[;ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMBhjJhhubj)}(h-iterate through all workers of a worker_pool h]h)}(h,iterate through all workers of a worker_poolh]h,iterate through all workers of a worker_pool}(hjw;hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM@hjs;ubah}(h]h ]h"]h$]h&]uh1jhj;hM@hjJhhubj)}(h**Parameters** ``worker`` iteration cursor ``pool`` worker_pool to iterate workers of **Description** This must be called with wq_pool_attach_mutex. 
The if/else clause exists only for the lockdep assertion and can be ignored.h](h)}(h**Parameters**h]j)}(hj;h]h Parameters}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj;ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMDhj;ubjS)}(hhh](jX)}(h``worker`` iteration cursor h](j^)}(h ``worker``h]j)}(hj;h]hworker}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj;ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMAhj;ubjw)}(hhh]h)}(hiteration cursorh]hiteration cursor}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj;hMAhj;ubah}(h]h ]h"]h$]h&]uh1jvhj;ubeh}(h]h ]h"]h$]h&]uh1jWhj;hMAhj;ubjX)}(h+``pool`` worker_pool to iterate workers of h](j^)}(h``pool``h]j)}(hj;h]hpool}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj;ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMBhj;ubjw)}(hhh]h)}(h!worker_pool to iterate workers ofh]h!worker_pool to iterate workers of}(hj<hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj;hMBhj<ubah}(h]h ]h"]h$]h&]uh1jvhj;ubeh}(h]h ]h"]h$]h&]uh1jWhj;hMBhj;ubeh}(h]h ]h"]h$]h&]uh1jRhj;ubh)}(h**Description**h]j)}(hj%<h]h Description}(hj'<hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj#<ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMDhj;ubh)}(h.This must be called with wq_pool_attach_mutex.h]h.This must be called with wq_pool_attach_mutex.}(hj;<hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMChj;ubh)}(hLThe if/else clause exists only for the lockdep assertion and can be ignored.h]hLThe if/else clause exists only for the lockdep assertion and can be ignored.}(hjJ<hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMEhj;ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhfor_each_pwq (C 
macro)c.for_each_pwqhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h for_each_pwqh]jx)}(h for_each_pwqh]j)}(h for_each_pwqh]j)}(hjs<h]h for_each_pwq}(hj}<hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjy<ubah}(h]h ](jjeh"]h$]h&]jjuh1jhju<hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMOubah}(h]h ]h"]h$]h&]jjjuh1jwjjhjq<hhhj<hMOubah}(h]jl<ah ](jjeh"]h$]h&]jj)jhuh1jqhj<hMOhjn<hhubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhjn<hhhj<hMOubeh}(h]h ](jmacroeh"]h$]h&]jjjj<jj<jjjuh1jlhhhjJhNhNubh)}(h``for_each_pwq (pwq, wq)``h]j)}(hj<h]hfor_each_pwq (pwq, wq)}(hj<hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj<ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMQhjJhhubj)}(h?iterate through all pool_workqueues of the specified workqueue h]h)}(h>iterate through all pool_workqueues of the specified workqueueh]h>iterate through all pool_workqueues of the specified workqueue}(hj<hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMOhj<ubah}(h]h ]h"]h$]h&]uh1jhj<hMOhjJhhubj)}(hXl**Parameters** ``pwq`` iteration cursor ``wq`` the target workqueue **Description** This must be called either with wq->mutex held or RCU read locked. If the pwq needs to be used beyond the locking in effect, the caller is responsible for guaranteeing that the pwq stays online. 
The if/else clause exists only for the lockdep assertion and can be ignored.h](h)}(h**Parameters**h]j)}(hj<h]h Parameters}(hj<hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj<ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMShj<ubjS)}(hhh](jX)}(h``pwq`` iteration cursor h](j^)}(h``pwq``h]j)}(hj=h]hpwq}(hj=hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj=ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMPhj<ubjw)}(hhh]h)}(hiteration cursorh]hiteration cursor}(hj=hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj=hMPhj=ubah}(h]h ]h"]h$]h&]uh1jvhj<ubeh}(h]h ]h"]h$]h&]uh1jWhj=hMPhj<ubjX)}(h``wq`` the target workqueue h](j^)}(h``wq``h]j)}(hj<=h]hwq}(hj>=hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj:=ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMQhj6=ubjw)}(hhh]h)}(hthe target workqueueh]hthe target workqueue}(hjU=hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjQ=hMQhjR=ubah}(h]h ]h"]h$]h&]uh1jvhj6=ubeh}(h]h ]h"]h$]h&]uh1jWhjQ=hMQhj<ubeh}(h]h ]h"]h$]h&]uh1jRhj<ubh)}(h**Description**h]j)}(hjw=h]h Description}(hjy=hhhNhNubah}(h]h ]h"]h$]h&]uh1jhju=ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMShj<ubh)}(hThis must be called either with wq->mutex held or RCU read locked. If the pwq needs to be used beyond the locking in effect, the caller is responsible for guaranteeing that the pwq stays online.h]hThis must be called either with wq->mutex held or RCU read locked. 
If the pwq needs to be used beyond the locking in effect, the caller is responsible for guaranteeing that the pwq stays online.}(hj=hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMRhj<ubh)}(hLThe if/else clause exists only for the lockdep assertion and can be ignored.h]hLThe if/else clause exists only for the lockdep assertion and can be ignored.}(hj=hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMVhj<ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh"worker_pool_assign_id (C function)c.worker_pool_assign_idhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h4int worker_pool_assign_id (struct worker_pool *pool)h]jx)}(h3int worker_pool_assign_id(struct worker_pool *pool)h](j4)}(hinth]hint}(hj=hhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj=hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hj=hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj=hhhj=hMubj)}(hworker_pool_assign_idh]j)}(hworker_pool_assign_idh]hworker_pool_assign_id}(hj=hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj=ubah}(h]h ](jjeh"]h$]h&]jjuh1jhj=hhhj=hMubj )}(h(struct worker_pool *pool)h]j)}(hstruct worker_pool *poolh](j~)}(hjh]hstruct}(hj>hhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj>ubj)}(h h]h }(hj>hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj>ubh)}(hhh]j)}(h worker_poolh]h worker_pool}(hj&>hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj#>ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj(>modnameN classnameNjj)}j]j)}jj=sbc.worker_pool_assign_idasbuh1hhj>ubj)}(h h]h }(hjF>hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj>ubj)}(hjah]h*}(hjT>hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj>ubj)}(hpoolh]hpool}(hja>hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj>ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj>ubah}(h]h ]h"]h$]h&]jjuh1j hj=hhhj=hMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhj=hhhj=hMubah}(h]j=ah ](jjeh"]h$]h&]jj)jhuh1jqhj=hMhj=hhubj)}(hhh]h)}(h%allocate ID and assign it to **pool**h](hallocate ID and assign it 
to }(hj>hhhNhNubj)}(h**pool**h]hpool}(hj>hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj>ubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj>hhubah}(h]h ]h"]h$]h&]uh1jhj=hhhj=hMubeh}(h]h ](jfunctioneh"]h$]h&]jjjj>jj>jjjuh1jlhhhjJhNhNubj)}(h**Parameters** ``struct worker_pool *pool`` the pool pointer of interest **Description** Returns 0 if ID in [0, WORK_OFFQ_POOL_NONE) is allocated and assigned successfully, -errno on failure.h](h)}(h**Parameters**h]j)}(hj>h]h Parameters}(hj>hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj>ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj>ubjS)}(hhh]jX)}(h:``struct worker_pool *pool`` the pool pointer of interest h](j^)}(h``struct worker_pool *pool``h]j)}(hj>h]hstruct worker_pool *pool}(hj>hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj>ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj>ubjw)}(hhh]h)}(hthe pool pointer of interesth]hthe pool pointer of interest}(hj>hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj>hMhj>ubah}(h]h ]h"]h$]h&]uh1jvhj>ubeh}(h]h ]h"]h$]h&]uh1jWhj>hMhj>ubah}(h]h ]h"]h$]h&]uh1jRhj>ubh)}(h**Description**h]j)}(hj?h]h Description}(hj?hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj?ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj>ubh)}(hfReturns 0 if ID in [0, WORK_OFFQ_POOL_NONE) is allocated and assigned successfully, -errno on failure.h]hfReturns 0 if ID in [0, WORK_OFFQ_POOL_NONE) is allocated and assigned successfully, -errno on failure.}(hj+?hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj>ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh&unbound_effective_cpumask (C function)c.unbound_effective_cpumaskhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hHstruct cpumask * unbound_effective_cpumask (struct workqueue_struct 
*wq)h]jx)}(hFstruct cpumask *unbound_effective_cpumask(struct workqueue_struct *wq)h](j~)}(hjh]hstruct}(hjZ?hhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjV?hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjh?hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjV?hhhjg?hMubh)}(hhh]j)}(hcpumaskh]hcpumask}(hjy?hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjv?ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj{?modnameN classnameNjj)}j]j)}junbound_effective_cpumasksbc.unbound_effective_cpumaskasbuh1hhjV?hhhjg?hMubj)}(h h]h }(hj?hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjV?hhhjg?hMubj)}(hjah]h*}(hj?hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjV?hhhjg?hMubj)}(hunbound_effective_cpumaskh]j)}(hj?h]hunbound_effective_cpumask}(hj?hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj?ubah}(h]h ](jjeh"]h$]h&]jjuh1jhjV?hhhjg?hMubj )}(h(struct workqueue_struct *wq)h]j)}(hstruct workqueue_struct *wqh](j~)}(hjh]hstruct}(hj?hhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj?ubj)}(h h]h }(hj?hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj?ubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hj?hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj?ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj?modnameN classnameNjj)}j]j?c.unbound_effective_cpumaskasbuh1hhj?ubj)}(h h]h }(hj@hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj?ubj)}(hjah]h*}(hj@hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj?ubj)}(hwqh]hwq}(hj+@hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj?ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj?ubah}(h]h ]h"]h$]h&]jjuh1j hjV?hhhjg?hMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjR?hhhjg?hMubah}(h]jM?ah ](jjeh"]h$]h&]jj)jhuh1jqhjg?hMhjO?hhubj)}(hhh]h)}(h)effective cpumask of an unbound workqueueh]h)effective cpumask of an unbound workqueue}(hjU@hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjR@hhubah}(h]h ]h"]h$]h&]uh1jhjO?hhhjg?hMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjm@jjm@jjjuh1jlhhhjJhNhNubj)}(hX@**Parameters** ``struct workqueue_struct *wq`` workqueue of interest **Description** **wq->unbound_attrs->cpumask** contains the cpumask requested 
by the user which is masked with wq_unbound_cpumask to determine the effective cpumask. The default pwq is always mapped to the pool with the current effective cpumask.h](h)}(h**Parameters**h]j)}(hjw@h]h Parameters}(hjy@hhhNhNubah}(h]h ]h"]h$]h&]uh1jhju@ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjq@ubjS)}(hhh]jX)}(h6``struct workqueue_struct *wq`` workqueue of interest h](j^)}(h``struct workqueue_struct *wq``h]j)}(hj@h]hstruct workqueue_struct *wq}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj@ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj@ubjw)}(hhh]h)}(hworkqueue of interesth]hworkqueue of interest}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj@hMhj@ubah}(h]h ]h"]h$]h&]uh1jvhj@ubeh}(h]h ]h"]h$]h&]uh1jWhj@hMhj@ubah}(h]h ]h"]h$]h&]uh1jRhjq@ubh)}(h**Description**h]j)}(hj@h]h Description}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj@ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjq@ubh)}(h**wq->unbound_attrs->cpumask** contains the cpumask requested by the user which is masked with wq_unbound_cpumask to determine the effective cpumask. The default pwq is always mapped to the pool with the current effective cpumask.h](j)}(h**wq->unbound_attrs->cpumask**h]hwq->unbound_attrs->cpumask}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj@ubh contains the cpumask requested by the user which is masked with wq_unbound_cpumask to determine the effective cpumask. 
The default pwq is always mapped to the pool with the current effective cpumask.}(hj@hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjq@ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhget_work_pool (C function)c.get_work_poolhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h=struct worker_pool * get_work_pool (struct work_struct *work)h]jx)}(h;struct worker_pool *get_work_pool(struct work_struct *work)h](j~)}(hjh]hstruct}(hj$AhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj AhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMeubj)}(h h]h }(hj2AhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj Ahhhj1AhMeubh)}(hhh]j)}(h worker_poolh]h worker_pool}(hjCAhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj@Aubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjEAmodnameN classnameNjj)}j]j)}j get_work_poolsbc.get_work_poolasbuh1hhj Ahhhj1AhMeubj)}(h h]h }(hjdAhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj Ahhhj1AhMeubj)}(hjah]h*}(hjrAhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj Ahhhj1AhMeubj)}(h get_work_poolh]j)}(hjaAh]h get_work_pool}(hjAhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjAubah}(h]h ](jjeh"]h$]h&]jjuh1jhj Ahhhj1AhMeubj )}(h(struct work_struct *work)h]j)}(hstruct work_struct *workh](j~)}(hjh]hstruct}(hjAhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjAubj)}(h h]h }(hjAhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjAubh)}(hhh]j)}(h work_structh]h work_struct}(hjAhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjAubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjAmodnameN classnameNjj)}j]j_Ac.get_work_poolasbuh1hhjAubj)}(h h]h }(hjAhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjAubj)}(hjah]h*}(hjAhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjAubj)}(hworkh]hwork}(hjAhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjAubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjAubah}(h]h ]h"]h$]h&]jjuh1j hj Ahhhj1AhMeubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjAhhhj1AhMeubah}(h]jAah ](jjeh"]h$]h&]jj)jhuh1jqhj1AhMehjAhhubj)}(hhh]h)}(h7return the worker_pool a given work was associated withh]h7return the worker_pool a given work 
was associated with}(hjBhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMehjBhhubah}(h]h ]h"]h$]h&]uh1jhjAhhhj1AhMeubeh}(h]h ](jfunctioneh"]h$]h&]jjjj7Bjj7Bjjjuh1jlhhhjJhNhNubj)}(hXi**Parameters** ``struct work_struct *work`` the work item of interest **Description** Pools are created and destroyed under wq_pool_mutex, and allows read access under RCU read lock. As such, this function should be called under wq_pool_mutex or inside of a rcu_read_lock() region. All fields of the returned pool are accessible as long as the above mentioned locking is in effect. If the returned pool needs to be used beyond the critical section, the caller is responsible for ensuring the returned pool is and stays online. **Return** The worker_pool **work** was last associated with. ``NULL`` if none.h](h)}(h**Parameters**h]j)}(hjABh]h Parameters}(hjCBhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj?Bubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMihj;BubjS)}(hhh]jX)}(h7``struct work_struct *work`` the work item of interest h](j^)}(h``struct work_struct *work``h]j)}(hj`Bh]hstruct work_struct *work}(hjbBhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj^Bubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMfhjZBubjw)}(hhh]h)}(hthe work item of interesth]hthe work item of interest}(hjyBhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjuBhMfhjvBubah}(h]h ]h"]h$]h&]uh1jvhjZBubeh}(h]h ]h"]h$]h&]uh1jWhjuBhMfhjWBubah}(h]h ]h"]h$]h&]uh1jRhj;Bubh)}(h**Description**h]j)}(hjBh]h Description}(hjBhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjBubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhhj;Bubh)}(hPools are created and destroyed under wq_pool_mutex, and allows read access under RCU read lock. 
As such, this function should be called under wq_pool_mutex or inside of a rcu_read_lock() region.h]hPools are created and destroyed under wq_pool_mutex, and allows read access under RCU read lock. As such, this function should be called under wq_pool_mutex or inside of a rcu_read_lock() region.}(hjBhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMghj;Bubh)}(hAll fields of the returned pool are accessible as long as the above mentioned locking is in effect. If the returned pool needs to be used beyond the critical section, the caller is responsible for ensuring the returned pool is and stays online.h]hAll fields of the returned pool are accessible as long as the above mentioned locking is in effect. If the returned pool needs to be used beyond the critical section, the caller is responsible for ensuring the returned pool is and stays online.}(hjBhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMkhj;Bubh)}(h **Return**h]j)}(hjBh]hReturn}(hjBhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjBubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMphj;Bubh)}(hEThe worker_pool **work** was last associated with. ``NULL`` if none.h](hThe worker_pool }(hjBhhhNhNubj)}(h**work**h]hwork}(hjBhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjBubh was last associated with. 
}(hjBhhhNhNubj)}(h``NULL``h]hNULL}(hjChhhNhNubah}(h]h ]h"]h$]h&]uh1jhjBubh if none.}(hjBhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMqhj;Bubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhworker_set_flags (C function)c.worker_set_flagshNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hAvoid worker_set_flags (struct worker *worker, unsigned int flags)h]jx)}(h@void worker_set_flags(struct worker *worker, unsigned int flags)h](j4)}(hvoidh]hvoid}(hj:ChhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj6ChhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjIChhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj6ChhhjHChMubj)}(hworker_set_flagsh]j)}(hworker_set_flagsh]hworker_set_flags}(hj[ChhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjWCubah}(h]h ](jjeh"]h$]h&]jjuh1jhj6ChhhjHChMubj )}(h+(struct worker *worker, unsigned int flags)h](j)}(hstruct worker *workerh](j~)}(hjh]hstruct}(hjwChhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjsCubj)}(h h]h }(hjChhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjsCubh)}(hhh]j)}(hworkerh]hworker}(hjChhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjCubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjCmodnameN classnameNjj)}j]j)}jj]Csbc.worker_set_flagsasbuh1hhjsCubj)}(h h]h }(hjChhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjsCubj)}(hjah]h*}(hjChhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjsCubj)}(hworkerh]hworker}(hjChhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjsCubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjoCubj)}(hunsigned int flagsh](j4)}(hunsignedh]hunsigned}(hjChhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjCubj)}(h h]h }(hjChhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjCubj4)}(hinth]hint}(hjDhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjCubj)}(h h]h }(hjDhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjCubj)}(hflagsh]hflags}(hj!DhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjCubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjoCubeh}(h]h ]h"]h$]h&]jjuh1j hj6ChhhjHChMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhj2ChhhjHChMubah}(h]j-Cah ](jjeh"]h$]h&]jj)jhuh1jqhjHChMhj/Chhubj)}(hhh]h)}(h2set worker flags and 
adjust nr_running accordinglyh]h2set worker flags and adjust nr_running accordingly}(hjKDhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjHDhhubah}(h]h ]h"]h$]h&]uh1jhj/ChhhjHChMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjcDjjcDjjjuh1jlhhhjJhNhNubj)}(h**Parameters** ``struct worker *worker`` self ``unsigned int flags`` flags to set **Description** Set **flags** in **worker->flags** and adjust nr_running accordingly.h](h)}(h**Parameters**h]j)}(hjmDh]h Parameters}(hjoDhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjkDubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjgDubjS)}(hhh](jX)}(h``struct worker *worker`` self h](j^)}(h``struct worker *worker``h]j)}(hjDh]hstruct worker *worker}(hjDhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjDubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjDubjw)}(hhh]h)}(hselfh]hself}(hjDhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjDhMhjDubah}(h]h ]h"]h$]h&]uh1jvhjDubeh}(h]h ]h"]h$]h&]uh1jWhjDhMhjDubjX)}(h$``unsigned int flags`` flags to set h](j^)}(h``unsigned int flags``h]j)}(hjDh]hunsigned int flags}(hjDhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjDubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjDubjw)}(hhh]h)}(h flags to seth]h flags to set}(hjDhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjDhMhjDubah}(h]h ]h"]h$]h&]uh1jvhjDubeh}(h]h ]h"]h$]h&]uh1jWhjDhMhjDubeh}(h]h ]h"]h$]h&]uh1jRhjgDubh)}(h**Description**h]j)}(hjEh]h Description}(hjEhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjDubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjgDubh)}(hESet **flags** in **worker->flags** and adjust nr_running accordingly.h](hSet }(hjEhhhNhNubj)}(h **flags**h]hflags}(hjEhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjEubh in }(hjEhhhNhNubj)}(h**worker->flags**h]h worker->flags}(hj0EhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjEubh# and 
adjust nr_running accordingly.}(hjEhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjgDubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhworker_clr_flags (C function)c.worker_clr_flagshNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hAvoid worker_clr_flags (struct worker *worker, unsigned int flags)h]jx)}(h@void worker_clr_flags(struct worker *worker, unsigned int flags)h](j4)}(hvoidh]hvoid}(hjiEhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjeEhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjxEhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjeEhhhjwEhMubj)}(hworker_clr_flagsh]j)}(hworker_clr_flagsh]hworker_clr_flags}(hjEhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjEubah}(h]h ](jjeh"]h$]h&]jjuh1jhjeEhhhjwEhMubj )}(h+(struct worker *worker, unsigned int flags)h](j)}(hstruct worker *workerh](j~)}(hjh]hstruct}(hjEhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjEubj)}(h h]h }(hjEhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjEubh)}(hhh]j)}(hworkerh]hworker}(hjEhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjEubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjEmodnameN classnameNjj)}j]j)}jjEsbc.worker_clr_flagsasbuh1hhjEubj)}(h h]h }(hjEhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjEubj)}(hjah]h*}(hjEhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjEubj)}(hworkerh]hworker}(hjEhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjEubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjEubj)}(hunsigned int flagsh](j4)}(hunsignedh]hunsigned}(hjFhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjFubj)}(h h]h }(hj&FhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjFubj4)}(hinth]hint}(hj4FhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjFubj)}(h h]h }(hjBFhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjFubj)}(hflagsh]hflags}(hjPFhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjFubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjEubeh}(h]h ]h"]h$]h&]jjuh1j hjeEhhhjwEhMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjaEhhhjwEhMubah}(h]j\Eah ](jjeh"]h$]h&]jj)jhuh1jqhjwEhMhj^Ehhubj)}(hhh]h)}(h4clear worker flags and adjust nr_running accordinglyh]h4clear worker flags and adjust 
nr_running accordingly}(hjzFhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjwFhhubah}(h]h ]h"]h$]h&]uh1jhj^EhhhjwEhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjFjjFjjjuh1jlhhhjJhNhNubj)}(h**Parameters** ``struct worker *worker`` self ``unsigned int flags`` flags to clear **Description** Clear **flags** in **worker->flags** and adjust nr_running accordingly.h](h)}(h**Parameters**h]j)}(hjFh]h Parameters}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjFubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjFubjS)}(hhh](jX)}(h``struct worker *worker`` self h](j^)}(h``struct worker *worker``h]j)}(hjFh]hstruct worker *worker}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjFubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjFubjw)}(hhh]h)}(hselfh]hself}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjFhMhjFubah}(h]h ]h"]h$]h&]uh1jvhjFubeh}(h]h ]h"]h$]h&]uh1jWhjFhMhjFubjX)}(h&``unsigned int flags`` flags to clear h](j^)}(h``unsigned int flags``h]j)}(hjFh]hunsigned int flags}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjFubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjFubjw)}(hhh]h)}(hflags to clearh]hflags to clear}(hj GhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj GhMhj Gubah}(h]h ]h"]h$]h&]uh1jvhjFubeh}(h]h ]h"]h$]h&]uh1jWhj GhMhjFubeh}(h]h ]h"]h$]h&]uh1jRhjFubh)}(h**Description**h]j)}(hj/Gh]h Description}(hj1GhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj-Gubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjFubh)}(hGClear **flags** in **worker->flags** and adjust nr_running accordingly.h](hClear }(hjEGhhhNhNubj)}(h **flags**h]hflags}(hjMGhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjEGubh in }(hjEGhhhNhNubj)}(h**worker->flags**h]h worker->flags}(hj_GhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjEGubh# and adjust nr_running 
accordingly.}(hjEGhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjFubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhworker_enter_idle (C function)c.worker_enter_idlehNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h.void worker_enter_idle (struct worker *worker)h]jx)}(h-void worker_enter_idle(struct worker *worker)h](j4)}(hvoidh]hvoid}(hjGhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjGhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjGhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjGhhhjGhMubj)}(hworker_enter_idleh]j)}(hworker_enter_idleh]hworker_enter_idle}(hjGhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjGubah}(h]h ](jjeh"]h$]h&]jjuh1jhjGhhhjGhMubj )}(h(struct worker *worker)h]j)}(hstruct worker *workerh](j~)}(hjh]hstruct}(hjGhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjGubj)}(h h]h }(hjGhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjGubh)}(hhh]j)}(hworkerh]hworker}(hjGhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjGubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjGmodnameN classnameNjj)}j]j)}jjGsbc.worker_enter_idleasbuh1hhjGubj)}(h h]h }(hjHhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjGubj)}(hjah]h*}(hj!HhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjGubj)}(hworkerh]hworker}(hj.HhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjGubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjGubah}(h]h ]h"]h$]h&]jjuh1j hjGhhhjGhMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjGhhhjGhMubah}(h]jGah ](jjeh"]h$]h&]jj)jhuh1jqhjGhMhjGhhubj)}(hhh]h)}(henter idle stateh]henter idle state}(hjXHhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjUHhhubah}(h]h ]h"]h$]h&]uh1jhjGhhhjGhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjpHjjpHjjjuh1jlhhhjJhNhNubj)}(h**Parameters** ``struct worker *worker`` worker which is entering idle state **Description** **worker** is entering idle state. Update stats and idle timer if necessary. 
LOCKING: raw_spin_lock_irq(pool->lock).h](h)}(h**Parameters**h]j)}(hjzHh]h Parameters}(hj|HhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjxHubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjtHubjS)}(hhh]jX)}(h>``struct worker *worker`` worker which is entering idle state h](j^)}(h``struct worker *worker``h]j)}(hjHh]hstruct worker *worker}(hjHhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjHubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjHubjw)}(hhh]h)}(h#worker which is entering idle stateh]h#worker which is entering idle state}(hjHhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjHhMhjHubah}(h]h ]h"]h$]h&]uh1jvhjHubeh}(h]h ]h"]h$]h&]uh1jWhjHhMhjHubah}(h]h ]h"]h$]h&]uh1jRhjtHubh)}(h**Description**h]j)}(hjHh]h Description}(hjHhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjHubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjtHubh)}(hM**worker** is entering idle state. Update stats and idle timer if necessary.h](j)}(h **worker**h]hworker}(hjHhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjHubhC is entering idle state. 
Update stats and idle timer if necessary.}(hjHhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjtHubh)}(h'LOCKING: raw_spin_lock_irq(pool->lock).h]h'LOCKING: raw_spin_lock_irq(pool->lock).}(hjIhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjtHubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhworker_leave_idle (C function)c.worker_leave_idlehNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h.void worker_leave_idle (struct worker *worker)h]jx)}(h-void worker_leave_idle(struct worker *worker)h](j4)}(hvoidh]hvoid}(hj6IhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj2IhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM%ubj)}(h h]h }(hjEIhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj2IhhhjDIhM%ubj)}(hworker_leave_idleh]j)}(hworker_leave_idleh]hworker_leave_idle}(hjWIhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjSIubah}(h]h ](jjeh"]h$]h&]jjuh1jhj2IhhhjDIhM%ubj )}(h(struct worker *worker)h]j)}(hstruct worker *workerh](j~)}(hjh]hstruct}(hjsIhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjoIubj)}(h h]h }(hjIhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjoIubh)}(hhh]j)}(hworkerh]hworker}(hjIhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjIubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjImodnameN classnameNjj)}j]j)}jjYIsbc.worker_leave_idleasbuh1hhjoIubj)}(h h]h }(hjIhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjoIubj)}(hjah]h*}(hjIhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjoIubj)}(hworkerh]hworker}(hjIhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjoIubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjkIubah}(h]h ]h"]h$]h&]jjuh1j hj2IhhhjDIhM%ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhj.IhhhjDIhM%ubah}(h]j)Iah ](jjeh"]h$]h&]jj)jhuh1jqhjDIhM%hj+Ihhubj)}(hhh]h)}(hleave idle stateh]hleave idle state}(hjIhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM%hjIhhubah}(h]h ]h"]h$]h&]uh1jhj+IhhhjDIhM%ubeh}(h]h 
](jfunctioneh"]h$]h&]jjjjJjjJjjjuh1jlhhhjJhNhNubj)}(h**Parameters** ``struct worker *worker`` worker which is leaving idle state **Description** **worker** is leaving idle state. Update stats. LOCKING: raw_spin_lock_irq(pool->lock).h](h)}(h**Parameters**h]j)}(hjJh]h Parameters}(hjJhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjJubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM)hjJubjS)}(hhh]jX)}(h=``struct worker *worker`` worker which is leaving idle state h](j^)}(h``struct worker *worker``h]j)}(hj7Jh]hstruct worker *worker}(hj9JhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj5Jubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM&hj1Jubjw)}(hhh]h)}(h"worker which is leaving idle stateh]h"worker which is leaving idle state}(hjPJhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjLJhM&hjMJubah}(h]h ]h"]h$]h&]uh1jvhj1Jubeh}(h]h ]h"]h$]h&]uh1jWhjLJhM&hj.Jubah}(h]h ]h"]h$]h&]uh1jRhjJubh)}(h**Description**h]j)}(hjrJh]h Description}(hjtJhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjpJubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM(hjJubh)}(h0**worker** is leaving idle state. Update stats.h](j)}(h **worker**h]hworker}(hjJhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjJubh& is leaving idle state. 
Update stats.}(hjJhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM'hjJubh)}(h'LOCKING: raw_spin_lock_irq(pool->lock).h]h'LOCKING: raw_spin_lock_irq(pool->lock).}(hjJhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM)hjJubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh'find_worker_executing_work (C function)c.find_worker_executing_workhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h_struct worker * find_worker_executing_work (struct worker_pool *pool, struct work_struct *work)h]jx)}(h]struct worker *find_worker_executing_work(struct worker_pool *pool, struct work_struct *work)h](j~)}(hjh]hstruct}(hjJhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjJhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM9ubj)}(h h]h }(hjJhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjJhhhjJhM9ubh)}(hhh]j)}(hworkerh]hworker}(hjJhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjJubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjJmodnameN classnameNjj)}j]j)}jfind_worker_executing_worksbc.find_worker_executing_workasbuh1hhjJhhhjJhM9ubj)}(h h]h }(hjKhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjJhhhjJhM9ubj)}(hjah]h*}(hj"KhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjJhhhjJhM9ubj)}(hfind_worker_executing_workh]j)}(hjKh]hfind_worker_executing_work}(hj3KhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj/Kubah}(h]h ](jjeh"]h$]h&]jjuh1jhjJhhhjJhM9ubj )}(h4(struct worker_pool *pool, struct work_struct *work)h](j)}(hstruct worker_pool *poolh](j~)}(hjh]hstruct}(hjNKhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjJKubj)}(h h]h }(hj[KhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjJKubh)}(hhh]j)}(h worker_poolh]h worker_pool}(hjlKhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjiKubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjnKmodnameN classnameNjj)}j]jKc.find_worker_executing_workasbuh1hhjJKubj)}(h h]h }(hjKhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjJKubj)}(hjah]h*}(hjKhhhNhNubah}(h]h 
]jah"]h$]h&]uh1jhjJKubj)}(hpoolh]hpool}(hjKhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjJKubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjFKubj)}(hstruct work_struct *workh](j~)}(hjh]hstruct}(hjKhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjKubj)}(h h]h }(hjKhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjKubh)}(hhh]j)}(h work_structh]h work_struct}(hjKhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjKubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjKmodnameN classnameNjj)}j]jKc.find_worker_executing_workasbuh1hhjKubj)}(h h]h }(hjKhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjKubj)}(hjah]h*}(hjLhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjKubj)}(hworkh]hwork}(hjLhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjKubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjFKubeh}(h]h ]h"]h$]h&]jjuh1j hjJhhhjJhM9ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjJhhhjJhM9ubah}(h]jJah ](jjeh"]h$]h&]jj)jhuh1jqhjJhM9hjJhhubj)}(hhh]h)}(h%find worker which is executing a workh]h%find worker which is executing a work}(hj?LhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM9hjbusy_hash** which is keyed by the address of **work**. For a worker to match, its current execution should match the address of **work** and its work function. This is to avoid unwanted dependency between unrelated work executions through a work item being recycled while still being executed. This is a bit tricky. A work item may be freed once its execution starts and nothing prevents the freed area from being recycled for another work item. If the same work item address ends up being reused before the original execution finishes, workqueue will identify the recycled work item as currently executing and make it wait until the current execution finishes, introducing an unwanted dependency. This function checks the work item address and work function to avoid false positives. Note that this isn't complete as one may construct a work function which can introduce dependency onto itself through a recycled work item. 
Well, if somebody wants to shoot oneself in the foot that badly, there's only so much we can do, and if such deadlock actually occurs, it should be easy to locate the culprit work function. **Context** raw_spin_lock_irq(pool->lock). **Return** Pointer to worker which is executing **work** if found, ``NULL`` otherwise.h](h)}(h**Parameters**h]j)}(hjaLh]h Parameters}(hjcLhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj_Lubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM=hj[LubjS)}(hhh](jX)}(h.``struct worker_pool *pool`` pool of interest h](j^)}(h``struct worker_pool *pool``h]j)}(hjLh]hstruct worker_pool *pool}(hjLhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj~Lubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM:hjzLubjw)}(hhh]h)}(hpool of interesth]hpool of interest}(hjLhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjLhM:hjLubah}(h]h ]h"]h$]h&]uh1jvhjzLubeh}(h]h ]h"]h$]h&]uh1jWhjLhM:hjwLubjX)}(h5``struct work_struct *work`` work to find worker for h](j^)}(h``struct work_struct *work``h]j)}(hjLh]hstruct work_struct *work}(hjLhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjLubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM;hjLubjw)}(hhh]h)}(hwork to find worker forh]hwork to find worker for}(hjLhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjLhM;hjLubah}(h]h ]h"]h$]h&]uh1jvhjLubeh}(h]h ]h"]h$]h&]uh1jWhjLhM;hjwLubeh}(h]h ]h"]h$]h&]uh1jRhj[Lubh)}(h**Description**h]j)}(hjLh]h Description}(hjLhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjLubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM=hj[Lubh)}(hXrFind a worker which is executing **work** on **pool** by searching **pool->busy_hash** which is keyed by the address of **work**. For a worker to match, its current execution should match the address of **work** and its work function. 
This is to avoid unwanted dependency between unrelated work executions through a work item being recycled while still being executed.h](h!Find a worker which is executing }(hj MhhhNhNubj)}(h**work**h]hwork}(hjMhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj Mubh on }(hj MhhhNhNubj)}(h**pool**h]hpool}(hj$MhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj Mubh by searching }(hj MhhhNhNubj)}(h**pool->busy_hash**h]hpool->busy_hash}(hj6MhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj Mubh" which is keyed by the address of }(hj MhhhNhNubj)}(h**work**h]hwork}(hjHMhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj MubhL. For a worker to match, its current execution should match the address of }(hj MhhhNhNubj)}(h**work**h]hwork}(hjZMhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj Mubh and its work function. This is to avoid unwanted dependency between unrelated work executions through a work item being recycled while still being executed.}(hj MhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM<hj[Lubh)}(hXThis is a bit tricky. A work item may be freed once its execution starts and nothing prevents the freed area from being recycled for another work item. If the same work item address ends up being reused before the original execution finishes, workqueue will identify the recycled work item as currently executing and make it wait until the current execution finishes, introducing an unwanted dependency.h]hXThis is a bit tricky. A work item may be freed once its execution starts and nothing prevents the freed area from being recycled for another work item. 
If the same work item address ends up being reused before the original execution finishes, workqueue will identify the recycled work item as currently executing and make it wait until the current execution finishes, introducing an unwanted dependency.}(hjsMhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMChj[Lubh)}(hXThis function checks the work item address and work function to avoid false positives. Note that this isn't complete as one may construct a work function which can introduce dependency onto itself through a recycled work item. Well, if somebody wants to shoot oneself in the foot that badly, there's only so much we can do, and if such deadlock actually occurs, it should be easy to locate the culprit work function.h]hXThis function checks the work item address and work function to avoid false positives. Note that this isn’t complete as one may construct a work function which can introduce dependency onto itself through a recycled work item. 
Well, if somebody wants to shoot oneself in the foot that badly, there’s only so much we can do, and if such deadlock actually occurs, it should be easy to locate the culprit work function.}(hjMhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMJhj[Lubh)}(h **Context**h]j)}(hjMh]hContext}(hjMhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjMubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMQhj[Lubh)}(hraw_spin_lock_irq(pool->lock).h]hraw_spin_lock_irq(pool->lock).}(hjMhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMRhj[Lubh)}(h **Return**h]j)}(hjMh]hReturn}(hjMhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjMubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMThj[Lubh)}(hKPointer to worker which is executing **work** if found, ``NULL`` otherwise.h](h%Pointer to worker which is executing }(hjMhhhNhNubj)}(h**work**h]hwork}(hjMhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjMubh if found, }(hjMhhhNhNubj)}(h``NULL``h]hNULL}(hjMhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjMubh otherwise.}(hjMhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMUhj[Lubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhmove_linked_works (C function)c.move_linked_workshNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hevoid move_linked_works (struct work_struct *work, struct list_head *head, struct work_struct **nextp)h]jx)}(hdvoid move_linked_works(struct work_struct *work, struct list_head *head, struct work_struct **nextp)h](j4)}(hvoidh]hvoid}(hj#NhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjNhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhubj)}(h h]h }(hj2NhhhNhNubah}(h]h 
]jah"]h$]h&]uh1jhjNhhhj1NhMhubj)}(hmove_linked_worksh]j)}(hmove_linked_worksh]hmove_linked_works}(hjDNhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj@Nubah}(h]h ](jjeh"]h$]h&]jjuh1jhjNhhhj1NhMhubj )}(hN(struct work_struct *work, struct list_head *head, struct work_struct **nextp)h](j)}(hstruct work_struct *workh](j~)}(hjh]hstruct}(hj`NhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj\Nubj)}(h h]h }(hjmNhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj\Nubh)}(hhh]j)}(h work_structh]h work_struct}(hj~NhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj{Nubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjNmodnameN classnameNjj)}j]j)}jjFNsbc.move_linked_worksasbuh1hhj\Nubj)}(h h]h }(hjNhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj\Nubj)}(hjah]h*}(hjNhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj\Nubj)}(hworkh]hwork}(hjNhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj\Nubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjXNubj)}(hstruct list_head *headh](j~)}(hjh]hstruct}(hjNhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjNubj)}(h h]h }(hjNhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjNubh)}(hhh]j)}(h list_headh]h list_head}(hjNhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjNubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjNmodnameN classnameNjj)}j]jNc.move_linked_worksasbuh1hhjNubj)}(h h]h }(hjOhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjNubj)}(hjah]h*}(hjOhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjNubj)}(hheadh]hhead}(hj)OhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjNubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjXNubj)}(hstruct work_struct **nextph](j~)}(hjh]hstruct}(hjBOhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj>Oubj)}(h h]h }(hjOOhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj>Oubh)}(hhh]j)}(h work_structh]h work_struct}(hj`OhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj]Oubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjbOmodnameN classnameNjj)}j]jNc.move_linked_worksasbuh1hhj>Oubj)}(h h]h }(hj~OhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj>Oubj)}(hjah]h*}(hjOhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj>Oubj)}(hjah]h*}(hjOhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj>Oubj)}(hnextph]hnextp}(hjOhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj>Oubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjXNubeh}(h]h ]h"]h$]h&]jjuh1j hjNhhhj1NhMhubeh}(h]h 
]h"]h$]h&]jjjuh1jwjjhjNhhhj1NhMhubah}(h]jNah ](jjeh"]h$]h&]jj)jhuh1jqhj1NhMhhjNhhubj)}(hhh]h)}(hmove linked works to a listh]hmove linked works to a list}(hjOhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhhjOhhubah}(h]h ]h"]h$]h&]uh1jhjNhhhj1NhMhubeh}(h]h ](jfunctioneh"]h$]h&]jjjjOjjOjjjuh1jlhhhjJhNhNubj)}(hX **Parameters** ``struct work_struct *work`` start of series of works to be scheduled ``struct list_head *head`` target list to append **work** to ``struct work_struct **nextp`` out parameter for nested worklist walking **Description** Schedule linked works starting from **work** to **head**. Work series to be scheduled starts at **work** and includes any consecutive work with WORK_STRUCT_LINKED set in its predecessor. See assign_work() for details on **nextp**. **Context** raw_spin_lock_irq(pool->lock).h](h)}(h**Parameters**h]j)}(hjOh]h Parameters}(hjOhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjOubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMlhjOubjS)}(hhh](jX)}(hF``struct work_struct *work`` start of series of works to be scheduled h](j^)}(h``struct work_struct *work``h]j)}(hjPh]hstruct work_struct *work}(hjPhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjPubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMihj Pubjw)}(hhh]h)}(h(start of series of works to be scheduledh]h(start of series of works to be scheduled}(hj*PhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj&PhMihj'Pubah}(h]h ]h"]h$]h&]uh1jvhj Pubeh}(h]h ]h"]h$]h&]uh1jWhj&PhMihjPubjX)}(h=``struct list_head *head`` target list to append **work** to h](j^)}(h``struct list_head *head``h]j)}(hjJPh]hstruct list_head *head}(hjLPhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjHPubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMjhjDPubjw)}(hhh]h)}(h!target list to append **work** toh](htarget list to 
append }(hjcPhhhNhNubj)}(h**work**h]hwork}(hjkPhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjcPubh to}(hjcPhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhj_PhMjhj`Pubah}(h]h ]h"]h$]h&]uh1jvhjDPubeh}(h]h ]h"]h$]h&]uh1jWhj_PhMjhjPubjX)}(hI``struct work_struct **nextp`` out parameter for nested worklist walking h](j^)}(h``struct work_struct **nextp``h]j)}(hjPh]hstruct work_struct **nextp}(hjPhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjPubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMkhjPubjw)}(hhh]h)}(h)out parameter for nested worklist walkingh]h)out parameter for nested worklist walking}(hjPhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjPhMkhjPubah}(h]h ]h"]h$]h&]uh1jvhjPubeh}(h]h ]h"]h$]h&]uh1jWhjPhMkhjPubeh}(h]h ]h"]h$]h&]uh1jRhjOubh)}(h**Description**h]j)}(hjPh]h Description}(hjPhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjPubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMmhjOubh)}(hSchedule linked works starting from **work** to **head**. Work series to be scheduled starts at **work** and includes any consecutive work with WORK_STRUCT_LINKED set in its predecessor. See assign_work() for details on **nextp**.h](h$Schedule linked works starting from }(hjPhhhNhNubj)}(h**work**h]hwork}(hjPhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjPubh to }(hjPhhhNhNubj)}(h**head**h]hhead}(hjQhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjPubh(. Work series to be scheduled starts at }(hjPhhhNhNubj)}(h**work**h]hwork}(hjQhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjPubht and includes any consecutive work with WORK_STRUCT_LINKED set in its predecessor. 
See assign_work() for details on }(hjPhhhNhNubj)}(h **nextp**h]hnextp}(hj$QhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjPubh.}(hjPhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMlhjOubh)}(h **Context**h]j)}(hj?Qh]hContext}(hjAQhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj=Qubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMqhjOubh)}(hraw_spin_lock_irq(pool->lock).h]hraw_spin_lock_irq(pool->lock).}(hjUQhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMrhjOubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhassign_work (C function) c.assign_workhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h^bool assign_work (struct work_struct *work, struct worker *worker, struct work_struct **nextp)h]jx)}(h]bool assign_work(struct work_struct *work, struct worker *worker, struct work_struct **nextp)h](j4)}(hj&h]hbool}(hjQhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjQhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjQhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjQhhhjQhMubj)}(h assign_workh]j)}(h assign_workh]h assign_work}(hjQhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjQubah}(h]h ](jjeh"]h$]h&]jjuh1jhjQhhhjQhMubj )}(hM(struct work_struct *work, struct worker *worker, struct work_struct **nextp)h](j)}(hstruct work_struct *workh](j~)}(hjh]hstruct}(hjQhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjQubj)}(h h]h }(hjQhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjQubh)}(hhh]j)}(h work_structh]h work_struct}(hjQhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjQubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjQmodnameN classnameNjj)}j]j)}jjQsb c.assign_workasbuh1hhjQubj)}(h h]h }(hjQhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjQubj)}(hjah]h*}(hj RhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjQubj)}(hworkh]hwork}(hjRhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjQubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjQubj)}(hstruct worker 
*workerh](j~)}(hjh]hstruct}(hj2RhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj.Rubj)}(h h]h }(hj?RhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj.Rubh)}(hhh]j)}(hworkerh]hworker}(hjPRhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjMRubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjRRmodnameN classnameNjj)}j]jQ c.assign_workasbuh1hhj.Rubj)}(h h]h }(hjnRhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj.Rubj)}(hjah]h*}(hj|RhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj.Rubj)}(hworkerh]hworker}(hjRhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj.Rubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjQubj)}(hstruct work_struct **nextph](j~)}(hjh]hstruct}(hjRhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjRubj)}(h h]h }(hjRhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjRubh)}(hhh]j)}(h work_structh]h work_struct}(hjRhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjRubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjRmodnameN classnameNjj)}j]jQ c.assign_workasbuh1hhjRubj)}(h h]h }(hjRhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjRubj)}(hjah]h*}(hjRhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjRubj)}(hjah]h*}(hjRhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjRubj)}(hnextph]hnextp}(hjShhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjRubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjQubeh}(h]h ]h"]h$]h&]jjuh1j hjQhhhjQhMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhj|QhhhjQhMubah}(h]jwQah ](jjeh"]h$]h&]jj)jhuh1jqhjQhMhjyQhhubj)}(hhh]h)}(h8assign a work item and its linked work items to a workerh]h8assign a work item and its linked work items to a worker}(hj0ShhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj-Shhubah}(h]h ]h"]h$]h&]uh1jhjyQhhhjQhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjHSjjHSjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``struct work_struct *work`` work to assign ``struct worker *worker`` worker to assign to ``struct work_struct **nextp`` out parameter for nested worklist walking **Description** Assign **work** and its linked work items to **worker**. If **work** is already being executed by another worker in the same pool, it'll be punted there. 
If **nextp** is not NULL, it's updated to point to the next work of the last scheduled work. This allows assign_work() to be nested inside list_for_each_entry_safe(). Returns ``true`` if **work** was successfully assigned to **worker**. ``false`` if **work** was punted to another worker already executing it.h](h)}(h**Parameters**h]j)}(hjRSh]h Parameters}(hjTShhhNhNubah}(h]h ]h"]h$]h&]uh1jhjPSubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjLSubjS)}(hhh](jX)}(h,``struct work_struct *work`` work to assign h](j^)}(h``struct work_struct *work``h]j)}(hjqSh]hstruct work_struct *work}(hjsShhhNhNubah}(h]h ]h"]h$]h&]uh1jhjoSubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjkSubjw)}(hhh]h)}(hwork to assignh]hwork to assign}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1hhjShMhjSubah}(h]h ]h"]h$]h&]uh1jvhjkSubeh}(h]h ]h"]h$]h&]uh1jWhjShMhjhSubjX)}(h.``struct worker *worker`` worker to assign to h](j^)}(h``struct worker *worker``h]j)}(hjSh]hstruct worker *worker}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1jhjSubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjSubjw)}(hhh]h)}(hworker to assign toh]hworker to assign to}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1hhjShMhjSubah}(h]h ]h"]h$]h&]uh1jvhjSubeh}(h]h ]h"]h$]h&]uh1jWhjShMhjhSubjX)}(hI``struct work_struct **nextp`` out parameter for nested worklist walking h](j^)}(h``struct work_struct **nextp``h]j)}(hjSh]hstruct work_struct **nextp}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1jhjSubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjSubjw)}(hhh]h)}(h)out parameter for nested worklist walkingh]h)out parameter for nested worklist walking}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1hhjShMhjSubah}(h]h ]h"]h$]h&]uh1jvhjSubeh}(h]h ]h"]h$]h&]uh1jWhjShMhjhSubeh}(h]h ]h"]h$]h&]uh1jRhjLSubh)}(h**Description**h]j)}(hjTh]h 
Description}(hj ThhhNhNubah}(h]h ]h"]h$]h&]uh1jhjTubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjLSubh)}(hAssign **work** and its linked work items to **worker**. If **work** is already being executed by another worker in the same pool, it'll be punted there.h](hAssign }(hj4ThhhNhNubj)}(h**work**h]hwork}(hjlock) **Return** The last work function ``current`` executed as a worker, NULL if it hasn't executed any work yet.h](h)}(h**Parameters**h]j)}(hj)\h]h Parameters}(hj+\hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj'\ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj#\ubjS)}(hhh]jX)}(hE``struct task_struct *task`` Task to retrieve last work function of. h](j^)}(h``struct task_struct *task``h]j)}(hjH\h]hstruct task_struct *task}(hjJ\hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjF\ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjB\ubjw)}(hhh]h)}(h'Task to retrieve last work function of.h]h'Task to retrieve last work function of.}(hja\hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj]\hMhj^\ubah}(h]h ]h"]h$]h&]uh1jvhjB\ubeh}(h]h ]h"]h$]h&]uh1jWhj]\hMhj?\ubah}(h]h ]h"]h$]h&]uh1jRhj#\ubh)}(h**Description**h]j)}(hj\h]h Description}(hj\hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj\ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj#\ubh)}(hwDetermine the last function a worker executed. This is called from the scheduler to get a worker's last known identity.h]hyDetermine the last function a worker executed. This is called from the scheduler to get a worker’s last known identity.}(hj\hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj#\ubh)}(hXThis function is called during schedule() when a kworker is going to sleep. 
It's used by psi to identify aggregation workers during dequeuing, to allow periodic aggregation to shut-off when that worker is the last task in the system or cgroup to go to sleep.h]hXThis function is called during schedule() when a kworker is going to sleep. It’s used by psi to identify aggregation workers during dequeuing, to allow periodic aggregation to shut-off when that worker is the last task in the system or cgroup to go to sleep.}(hj\hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj#\ubh)}(hAs this function doesn't involve any workqueue-related locking, it only returns stable values when called from inside the scheduler's queuing and dequeuing paths, when **task**, which must be a kworker, is guaranteed to not be processing any works.h](hAs this function doesn’t involve any workqueue-related locking, it only returns stable values when called from inside the scheduler’s queuing and dequeuing paths, when }(hj\hhhNhNubj)}(h**task**h]htask}(hj\hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj\ubhH, which must be a kworker, is guaranteed to not be processing any works.}(hj\hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj#\ubh)}(h **Context**h]j)}(hj\h]hContext}(hj\hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj\ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj#\ubh)}(hraw_spin_lock_irq(rq->lock)h]hraw_spin_lock_irq(rq->lock)}(hj\hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj#\ubh)}(h **Return**h]j)}(hj]h]hReturn}(hj]hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj\ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj#\ubh)}(haThe last work function ``current`` executed as a worker, NULL if it hasn't executed any work yet.h](hThe last work function 
}(hj]hhhNhNubj)}(h ``current``h]hcurrent}(hj]hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj]ubhA executed as a worker, NULL if it hasn’t executed any work yet.}(hj]hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj#\ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhwq_node_nr_active (C function)c.wq_node_nr_activehNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hTstruct wq_node_nr_active * wq_node_nr_active (struct workqueue_struct *wq, int node)h]jx)}(hRstruct wq_node_nr_active *wq_node_nr_active(struct workqueue_struct *wq, int node)h](j~)}(hjh]hstruct}(hjX]hhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjT]hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjf]hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjT]hhhje]hMubh)}(hhh]j)}(hwq_node_nr_activeh]hwq_node_nr_active}(hjw]hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjt]ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjy]modnameN classnameNjj)}j]j)}jwq_node_nr_activesbc.wq_node_nr_activeasbuh1hhjT]hhhje]hMubj)}(h h]h }(hj]hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjT]hhhje]hMubj)}(hjah]h*}(hj]hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjT]hhhje]hMubj)}(hwq_node_nr_activeh]j)}(hj]h]hwq_node_nr_active}(hj]hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj]ubah}(h]h ](jjeh"]h$]h&]jjuh1jhjT]hhhje]hMubj )}(h'(struct workqueue_struct *wq, int node)h](j)}(hstruct workqueue_struct *wqh](j~)}(hjh]hstruct}(hj]hhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj]ubj)}(h h]h }(hj]hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj]ubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hj]hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj]ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj]modnameN classnameNjj)}j]j]c.wq_node_nr_activeasbuh1hhj]ubj)}(h h]h }(hj^hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj]ubj)}(hjah]h*}(hj^hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj]ubj)}(hwqh]hwq}(hj)^hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj]ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj]ubj)}(hint nodeh](j4)}(hinth]hint}(hjB^hhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj>^ubj)}(h 
h]h }(hjP^hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj>^ubj)}(hnodeh]hnode}(hj^^hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj>^ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj]ubeh}(h]h ]h"]h$]h&]jjuh1j hjT]hhhje]hMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjP]hhhje]hMubah}(h]jK]ah ](jjeh"]h$]h&]jj)jhuh1jqhje]hMhjM]hhubj)}(hhh]h)}(h"Determine wq_node_nr_active to useh]h"Determine wq_node_nr_active to use}(hj^hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj^hhubah}(h]h ]h"]h$]h&]uh1jhjM]hhhje]hMubeh}(h]h ](jfunctioneh"]h$]h&]jjjj^jj^jjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``struct workqueue_struct *wq`` workqueue of interest ``int node`` NUMA node, can be ``NUMA_NO_NODE`` **Description** Determine wq_node_nr_active to use for **wq** on **node**. Returns: - ``NULL`` for per-cpu workqueues as they don't need to use shared nr_active. - node_nr_active[nr_node_ids] if **node** is ``NUMA_NO_NODE``. - Otherwise, node_nr_active[**node**].h](h)}(h**Parameters**h]j)}(hj^h]h Parameters}(hj^hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj^ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hj^ubjS)}(hhh](jX)}(h6``struct workqueue_struct *wq`` workqueue of interest h](j^)}(h``struct workqueue_struct *wq``h]j)}(hj^h]hstruct workqueue_struct *wq}(hj^hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj^ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hj^ubjw)}(hhh]h)}(hworkqueue of interesth]hworkqueue of interest}(hj^hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj^hM hj^ubah}(h]h ]h"]h$]h&]uh1jvhj^ubeh}(h]h ]h"]h$]h&]uh1jWhj^hM hj^ubjX)}(h0``int node`` NUMA node, can be ``NUMA_NO_NODE`` h](j^)}(h ``int node``h]j)}(hj_h]hint node}(hj_hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj_ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hj^ubjw)}(hhh]h)}(h"NUMA node, can be ``NUMA_NO_NODE``h](hNUMA node, can be 
}(hj_hhhNhNubj)}(h``NUMA_NO_NODE``h]h NUMA_NO_NODE}(hj#_hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj_ubeh}(h]h ]h"]h$]h&]uh1hhj_hM hj_ubah}(h]h ]h"]h$]h&]uh1jvhj^ubeh}(h]h ]h"]h$]h&]uh1jWhj_hM hj^ubeh}(h]h ]h"]h$]h&]uh1jRhj^ubh)}(h**Description**h]j)}(hjK_h]h Description}(hjM_hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjI_ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hj^ubh)}(hCDetermine wq_node_nr_active to use for **wq** on **node**. Returns:h](h'Determine wq_node_nr_active to use for }(hja_hhhNhNubj)}(h**wq**h]hwq}(hji_hhhNhNubah}(h]h ]h"]h$]h&]uh1jhja_ubh on }(hja_hhhNhNubj)}(h**node**h]hnode}(hj{_hhhNhNubah}(h]h ]h"]h$]h&]uh1jhja_ubh . Returns:}(hja_hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hj^ubj )}(hhh](j)}(hL``NULL`` for per-cpu workqueues as they don't need to use shared nr_active. h]h)}(hK``NULL`` for per-cpu workqueues as they don't need to use shared nr_active.h](j)}(h``NULL``h]hNULL}(hj_hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj_ubhE for per-cpu workqueues as they don’t need to use shared nr_active.}(hj_hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hj_ubah}(h]h ]h"]h$]h&]uh1jhj_ubj)}(h=node_nr_active[nr_node_ids] if **node** is ``NUMA_NO_NODE``. h]h)}(hahhubah}(h]h ]h"]h$]h&]uh1jhjA`hhhjZ`hM!ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjYajjYajjjuh1jlhhhjJhNhNubj)}(hX{**Parameters** ``struct workqueue_struct *wq`` workqueue to update ``int off_cpu`` CPU that's going down, -1 if a CPU is not going down **Description** Update **wq->node_nr_active**[]->max. **wq** must be unbound. max_active is distributed among nodes according to the proportions of numbers of online cpus. 
The result is always between **wq->min_active** and max_active.h](h)}(h**Parameters**h]j)}(hjcah]h Parameters}(hjeahhhNhNubah}(h]h ]h"]h$]h&]uh1jhjaaubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM%hj]aubjS)}(hhh](jX)}(h4``struct workqueue_struct *wq`` workqueue to update h](j^)}(h``struct workqueue_struct *wq``h]j)}(hjah]hstruct workqueue_struct *wq}(hjahhhNhNubah}(h]h ]h"]h$]h&]uh1jhjaubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM"hj|aubjw)}(hhh]h)}(hworkqueue to updateh]hworkqueue to update}(hjahhhNhNubah}(h]h ]h"]h$]h&]uh1hhjahM"hjaubah}(h]h ]h"]h$]h&]uh1jvhj|aubeh}(h]h ]h"]h$]h&]uh1jWhjahM"hjyaubjX)}(hE``int off_cpu`` CPU that's going down, -1 if a CPU is not going down h](j^)}(h``int off_cpu``h]j)}(hjah]h int off_cpu}(hjahhhNhNubah}(h]h ]h"]h$]h&]uh1jhjaubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM#hjaubjw)}(hhh]h)}(h4CPU that's going down, -1 if a CPU is not going downh]h6CPU that’s going down, -1 if a CPU is not going down}(hjahhhNhNubah}(h]h ]h"]h$]h&]uh1hhjahM#hjaubah}(h]h ]h"]h$]h&]uh1jvhjaubeh}(h]h ]h"]h$]h&]uh1jWhjahM#hjyaubeh}(h]h ]h"]h$]h&]uh1jRhj]aubh)}(h**Description**h]j)}(hjah]h Description}(hjahhhNhNubah}(h]h ]h"]h$]h&]uh1jhjaubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM%hj]aubh)}(hUpdate **wq->node_nr_active**[]->max. **wq** must be unbound. max_active is distributed among nodes according to the proportions of numbers of online cpus. The result is always between **wq->min_active** and max_active.h](hUpdate }(hj bhhhNhNubj)}(h%**wq->node_nr_active**[]->max. **wq**h]h!wq->node_nr_active**[]->max. **wq}(hjbhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj bubh must be unbound. max_active is distributed among nodes according to the proportions of numbers of online cpus. 
The result is always between }(hj bhhhNhNubj)}(h**wq->min_active**h]hwq->min_active}(hj&bhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj bubh and max_active.}(hj bhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM$hj]aubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhget_pwq (C function) c.get_pwqhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h)void get_pwq (struct pool_workqueue *pwq)h]jx)}(h(void get_pwq(struct pool_workqueue *pwq)h](j4)}(hvoidh]hvoid}(hj_bhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj[bhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMUubj)}(h h]h }(hjnbhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj[bhhhjmbhMUubj)}(hget_pwqh]j)}(hget_pwqh]hget_pwq}(hjbhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj|bubah}(h]h ](jjeh"]h$]h&]jjuh1jhj[bhhhjmbhMUubj )}(h(struct pool_workqueue *pwq)h]j)}(hstruct pool_workqueue *pwqh](j~)}(hjh]hstruct}(hjbhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjbubj)}(h h]h }(hjbhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjbubh)}(hhh]j)}(hpool_workqueueh]hpool_workqueue}(hjbhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjbubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjbmodnameN classnameNjj)}j]j)}jjbsb c.get_pwqasbuh1hhjbubj)}(h h]h }(hjbhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjbubj)}(hjah]h*}(hjbhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjbubj)}(hpwqh]hpwq}(hjbhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjbubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjbubah}(h]h ]h"]h$]h&]jjuh1j hj[bhhhjmbhMUubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjWbhhhjmbhMUubah}(h]jRbah ](jjeh"]h$]h&]jj)jhuh1jqhjmbhMUhjTbhhubj)}(hhh]h)}(h6get an extra reference on the specified pool_workqueueh]h6get an extra reference on the specified pool_workqueue}(hjchhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMUhjchhubah}(h]h ]h"]h$]h&]uh1jhjTbhhhjmbhMUubeh}(h]h ](jfunctioneh"]h$]h&]jjjj7cjj7cjjjuh1jlhhhjJhNhNubj)}(h**Parameters** ``struct pool_workqueue *pwq`` pool_workqueue to get 
**Description** Obtain an extra reference on **pwq**. The caller should guarantee that **pwq** has positive refcnt and be holding the matching pool->lock.h](h)}(h**Parameters**h]j)}(hjAch]h Parameters}(hjCchhhNhNubah}(h]h ]h"]h$]h&]uh1jhj?cubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMYhj;cubjS)}(hhh]jX)}(h5``struct pool_workqueue *pwq`` pool_workqueue to get h](j^)}(h``struct pool_workqueue *pwq``h]j)}(hj`ch]hstruct pool_workqueue *pwq}(hjbchhhNhNubah}(h]h ]h"]h$]h&]uh1jhj^cubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMVhjZcubjw)}(hhh]h)}(hpool_workqueue to geth]hpool_workqueue to get}(hjychhhNhNubah}(h]h ]h"]h$]h&]uh1hhjuchMVhjvcubah}(h]h ]h"]h$]h&]uh1jvhjZcubeh}(h]h ]h"]h$]h&]uh1jWhjuchMVhjWcubah}(h]h ]h"]h$]h&]uh1jRhj;cubh)}(h**Description**h]j)}(hjch]h Description}(hjchhhNhNubah}(h]h ]h"]h$]h&]uh1jhjcubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMXhj;cubh)}(hObtain an extra reference on **pwq**. The caller should guarantee that **pwq** has positive refcnt and be holding the matching pool->lock.h](hObtain an extra reference on }(hjchhhNhNubj)}(h**pwq**h]hpwq}(hjchhhNhNubah}(h]h ]h"]h$]h&]uh1jhjcubh$. 
The caller should guarantee that }(hjchhhNhNubj)}(h**pwq**h]hpwq}(hjchhhNhNubah}(h]h ]h"]h$]h&]uh1jhjcubh< has positive refcnt and be holding the matching pool->lock.}(hjchhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMWhj;cubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhput_pwq (C function) c.put_pwqhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h)void put_pwq (struct pool_workqueue *pwq)h]jx)}(h(void put_pwq(struct pool_workqueue *pwq)h](j4)}(hvoidh]hvoid}(hjdhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjdhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMcubj)}(h h]h }(hjdhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjdhhhjdhMcubj)}(hput_pwqh]j)}(hput_pwqh]hput_pwq}(hj%dhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj!dubah}(h]h ](jjeh"]h$]h&]jjuh1jhjdhhhjdhMcubj )}(h(struct pool_workqueue *pwq)h]j)}(hstruct pool_workqueue *pwqh](j~)}(hjh]hstruct}(hjAdhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj=dubj)}(h h]h }(hjNdhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj=dubh)}(hhh]j)}(hpool_workqueueh]hpool_workqueue}(hj_dhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj\dubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjadmodnameN classnameNjj)}j]j)}jj'dsb c.put_pwqasbuh1hhj=dubj)}(h h]h }(hjdhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj=dubj)}(hjah]h*}(hjdhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj=dubj)}(hpwqh]hpwq}(hjdhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj=dubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj9dubah}(h]h ]h"]h$]h&]jjuh1j hjdhhhjdhMcubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjchhhjdhMcubah}(h]jcah ](jjeh"]h$]h&]jj)jhuh1jqhjdhMchjchhubj)}(hhh]h)}(hput a pool_workqueue referenceh]hput a pool_workqueue reference}(hjdhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMchjdhhubah}(h]h ]h"]h$]h&]uh1jhjchhhjdhMcubeh}(h]h ](jfunctioneh"]h$]h&]jjjjdjjdjjjuh1jlhhhjJhNhNubj)}(h**Parameters** ``struct pool_workqueue *pwq`` pool_workqueue to put **Description** Drop a reference of 
**pwq**. If its refcnt reaches zero, schedule its destruction. The caller should be holding the matching pool->lock.h](h)}(h**Parameters**h]j)}(hjdh]h Parameters}(hjdhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjdubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMghjdubjS)}(hhh]jX)}(h5``struct pool_workqueue *pwq`` pool_workqueue to put h](j^)}(h``struct pool_workqueue *pwq``h]j)}(hjeh]hstruct pool_workqueue *pwq}(hjehhhNhNubah}(h]h ]h"]h$]h&]uh1jhjeubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMdhjdubjw)}(hhh]h)}(hpool_workqueue to puth]hpool_workqueue to put}(hjehhhNhNubah}(h]h ]h"]h$]h&]uh1hhjehMdhjeubah}(h]h ]h"]h$]h&]uh1jvhjdubeh}(h]h ]h"]h$]h&]uh1jWhjehMdhjdubah}(h]h ]h"]h$]h&]uh1jRhjdubh)}(h**Description**h]j)}(hj@eh]h Description}(hjBehhhNhNubah}(h]h ]h"]h$]h&]uh1jhj>eubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMfhjdubh)}(hDrop a reference of **pwq**. If its refcnt reaches zero, schedule its destruction. The caller should be holding the matching pool->lock.h](hDrop a reference of }(hjVehhhNhNubj)}(h**pwq**h]hpwq}(hj^ehhhNhNubah}(h]h ]h"]h$]h&]uh1jhjVeubho. If its refcnt reaches zero, schedule its destruction. 
The caller should be holding the matching pool->lock.}(hjVehhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMehjdubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhput_pwq_unlocked (C function)c.put_pwq_unlockedhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h2void put_pwq_unlocked (struct pool_workqueue *pwq)h]jx)}(h1void put_pwq_unlocked(struct pool_workqueue *pwq)h](j4)}(hvoidh]hvoid}(hjehhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjehhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMvubj)}(h h]h }(hjehhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjehhhjehMvubj)}(hput_pwq_unlockedh]j)}(hput_pwq_unlockedh]hput_pwq_unlocked}(hjehhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjeubah}(h]h ](jjeh"]h$]h&]jjuh1jhjehhhjehMvubj )}(h(struct pool_workqueue *pwq)h]j)}(hstruct pool_workqueue *pwqh](j~)}(hjh]hstruct}(hjehhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjeubj)}(h h]h }(hjehhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjeubh)}(hhh]j)}(hpool_workqueueh]hpool_workqueue}(hjehhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjeubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjemodnameN classnameNjj)}j]j)}jjesbc.put_pwq_unlockedasbuh1hhjeubj)}(h h]h }(hjfhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjeubj)}(hjah]h*}(hj fhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjeubj)}(hpwqh]hpwq}(hj-fhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjeubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjeubah}(h]h ]h"]h$]h&]jjuh1j hjehhhjehMvubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjehhhjehMvubah}(h]jeah ](jjeh"]h$]h&]jj)jhuh1jqhjehMvhjehhubj)}(hhh]h)}(h+put_pwq() with surrounding pool lock/unlockh]h+put_pwq() with surrounding pool lock/unlock}(hjWfhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMvhjTfhhubah}(h]h ]h"]h$]h&]uh1jhjehhhjehMvubeh}(h]h ](jfunctioneh"]h$]h&]jjjjofjjofjjjuh1jlhhhjJhNhNubj)}(h**Parameters** ``struct pool_workqueue *pwq`` pool_workqueue to put (can be ``NULL``) **Description** put_pwq() with locking. 
This function also allows ``NULL`` **pwq**.h](h)}(h**Parameters**h]j)}(hjyfh]h Parameters}(hj{fhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjwfubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMzhjsfubjS)}(hhh]jX)}(hG``struct pool_workqueue *pwq`` pool_workqueue to put (can be ``NULL``) h](j^)}(h``struct pool_workqueue *pwq``h]j)}(hjfh]hstruct pool_workqueue *pwq}(hjfhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjfubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMwhjfubjw)}(hhh]h)}(h'pool_workqueue to put (can be ``NULL``)h](hpool_workqueue to put (can be }(hjfhhhNhNubj)}(h``NULL``h]hNULL}(hjfhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjfubh)}(hjfhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhjfhMwhjfubah}(h]h ]h"]h$]h&]uh1jvhjfubeh}(h]h ]h"]h$]h&]uh1jWhjfhMwhjfubah}(h]h ]h"]h$]h&]uh1jRhjsfubh)}(h**Description**h]j)}(hjfh]h Description}(hjfhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjfubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMyhjsfubh)}(hDput_pwq() with locking. This function also allows ``NULL`` **pwq**.h](h3put_pwq() with locking. 
This function also allows }(hjfhhhNhNubj)}(h``NULL``h]hNULL}(hjghhhNhNubah}(h]h ]h"]h$]h&]uh1jhjfubh }(hjfhhhNhNubj)}(h**pwq**h]hpwq}(hjghhhNhNubah}(h]h ]h"]h$]h&]uh1jhjfubh.}(hjfhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMxhjsfubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh!pwq_tryinc_nr_active (C function)c.pwq_tryinc_nr_activehNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hAbool pwq_tryinc_nr_active (struct pool_workqueue *pwq, bool fill)h]jx)}(h@bool pwq_tryinc_nr_active(struct pool_workqueue *pwq, bool fill)h](j4)}(hj&h]hbool}(hjNghhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjJghhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hj\ghhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjJghhhj[ghMubj)}(hpwq_tryinc_nr_activeh]j)}(hpwq_tryinc_nr_activeh]hpwq_tryinc_nr_active}(hjnghhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjjgubah}(h]h ](jjeh"]h$]h&]jjuh1jhjJghhhj[ghMubj )}(h'(struct pool_workqueue *pwq, bool fill)h](j)}(hstruct pool_workqueue *pwqh](j~)}(hjh]hstruct}(hjghhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjgubj)}(h h]h }(hjghhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjgubh)}(hhh]j)}(hpool_workqueueh]hpool_workqueue}(hjghhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjgubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjgmodnameN classnameNjj)}j]j)}jjpgsbc.pwq_tryinc_nr_activeasbuh1hhjgubj)}(h h]h }(hjghhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjgubj)}(hjah]h*}(hjghhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjgubj)}(hpwqh]hpwq}(hjghhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjgubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjgubj)}(h bool fillh](j4)}(hj&h]hbool}(hjghhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjgubj)}(h h]h }(hj hhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjgubj)}(hfillh]hfill}(hjhhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjgubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjgubeh}(h]h ]h"]h$]h&]jjuh1j hjJghhhj[ghMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjFghhhj[ghMubah}(h]jAgah ](jjeh"]h$]h&]jj)jhuh1jqhj[ghMhjCghhubj)}(hhh]h)}(h$Try to increment nr_active for a 
pwqh]h$Try to increment nr_active for a pwq}(hjAhhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj>hhhubah}(h]h ]h"]h$]h&]uh1jhjCghhhj[ghMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjYhjjYhjjjuh1jlhhhjJhNhNubj)}(hX-**Parameters** ``struct pool_workqueue *pwq`` pool_workqueue of interest ``bool fill`` max_active may have increased, try to increase concurrency level **Description** Try to increment nr_active for **pwq**. Returns ``true`` if an nr_active count is successfully obtained. ``false`` otherwise.h](h)}(h**Parameters**h]j)}(hjchh]h Parameters}(hjehhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjahubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj]hubjS)}(hhh](jX)}(h:``struct pool_workqueue *pwq`` pool_workqueue of interest h](j^)}(h``struct pool_workqueue *pwq``h]j)}(hjhh]hstruct pool_workqueue *pwq}(hjhhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjhubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj|hubjw)}(hhh]h)}(hpool_workqueue of interesth]hpool_workqueue of interest}(hjhhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhMhjhubah}(h]h ]h"]h$]h&]uh1jvhj|hubeh}(h]h ]h"]h$]h&]uh1jWhjhhMhjyhubjX)}(hO``bool fill`` max_active may have increased, try to increase concurrency level h](j^)}(h ``bool fill``h]j)}(hjhh]h bool fill}(hjhhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjhubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjhubjw)}(hhh]h)}(h@max_active may have increased, try to increase concurrency levelh]h@max_active may have increased, try to increase concurrency level}(hjhhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhMhjhubah}(h]h ]h"]h$]h&]uh1jvhjhubeh}(h]h ]h"]h$]h&]uh1jWhjhhMhjyhubeh}(h]h ]h"]h$]h&]uh1jRhj]hubh)}(h**Description**h]j)}(hjhh]h Description}(hjhhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjhubah}(h]h 
]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj]hubh)}(h}Try to increment nr_active for **pwq**. Returns ``true`` if an nr_active count is successfully obtained. ``false`` otherwise.h](hTry to increment nr_active for }(hj ihhhNhNubj)}(h**pwq**h]hpwq}(hjihhhNhNubah}(h]h ]h"]h$]h&]uh1jhj iubh . Returns }(hj ihhhNhNubj)}(h``true``h]htrue}(hj&ihhhNhNubah}(h]h ]h"]h$]h&]uh1jhj iubh1 if an nr_active count is successfully obtained. }(hj ihhhNhNubj)}(h ``false``h]hfalse}(hj8ihhhNhNubah}(h]h ]h"]h$]h&]uh1jhj iubh otherwise.}(hj ihhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj]hubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh(pwq_activate_first_inactive (C function)c.pwq_activate_first_inactivehNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hHbool pwq_activate_first_inactive (struct pool_workqueue *pwq, bool fill)h]jx)}(hGbool pwq_activate_first_inactive(struct pool_workqueue *pwq, bool fill)h](j4)}(hj&h]hbool}(hjqihhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjmihhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjihhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjmihhhj~ihMubj)}(hpwq_activate_first_inactiveh]j)}(hpwq_activate_first_inactiveh]hpwq_activate_first_inactive}(hjihhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjiubah}(h]h ](jjeh"]h$]h&]jjuh1jhjmihhhj~ihMubj )}(h'(struct pool_workqueue *pwq, bool fill)h](j)}(hstruct pool_workqueue *pwqh](j~)}(hjh]hstruct}(hjihhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjiubj)}(h h]h }(hjihhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjiubh)}(hhh]j)}(hpool_workqueueh]hpool_workqueue}(hjihhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjiubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjimodnameN classnameNjj)}j]j)}jjisbc.pwq_activate_first_inactiveasbuh1hhjiubj)}(h h]h }(hjihhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjiubj)}(hjah]h*}(hjihhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjiubj)}(hpwqh]hpwq}(hjjhhhNhNubah}(h]h 
]jah"]h$]h&]uh1jhjiubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjiubj)}(h bool fillh](j4)}(hj&h]hbool}(hjjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjjubj)}(h h]h }(hj,jhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjjubj)}(hfillh]hfill}(hj:jhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjiubeh}(h]h ]h"]h$]h&]jjuh1j hjmihhhj~ihMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjiihhhj~ihMubah}(h]jdiah ](jjeh"]h$]h&]jj)jhuh1jqhj~ihMhjfihhubj)}(hhh]h)}(h.Activate the first inactive work item on a pwqh]h.Activate the first inactive work item on a pwq}(hjdjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjajhhubah}(h]h ]h"]h$]h&]uh1jhjfihhhj~ihMubeh}(h]h ](jfunctioneh"]h$]h&]jjjj|jjj|jjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``struct pool_workqueue *pwq`` pool_workqueue of interest ``bool fill`` max_active may have increased, try to increase concurrency level **Description** Activate the first inactive work item of **pwq** if available and allowed by max_active limit. Returns ``true`` if an inactive work item has been activated. 
``false`` if no inactive work item is found or max_active limit is reached.h](h)}(h**Parameters**h]j)}(hjjh]h Parameters}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjjubjS)}(hhh](jX)}(h:``struct pool_workqueue *pwq`` pool_workqueue of interest h](j^)}(h``struct pool_workqueue *pwq``h]j)}(hjjh]hstruct pool_workqueue *pwq}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjjubjw)}(hhh]h)}(hpool_workqueue of interesth]hpool_workqueue of interest}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjjhMhjjubah}(h]h ]h"]h$]h&]uh1jvhjjubeh}(h]h ]h"]h$]h&]uh1jWhjjhMhjjubjX)}(hO``bool fill`` max_active may have increased, try to increase concurrency level h](j^)}(h ``bool fill``h]j)}(hjjh]h bool fill}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjjubjw)}(hhh]h)}(h@max_active may have increased, try to increase concurrency levelh]h@max_active may have increased, try to increase concurrency level}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjjhMhjjubah}(h]h ]h"]h$]h&]uh1jvhjjubeh}(h]h ]h"]h$]h&]uh1jWhjjhMhjjubeh}(h]h ]h"]h$]h&]uh1jRhjjubh)}(h**Description**h]j)}(hjkh]h Description}(hjkhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjkubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjjubh)}(h^Activate the first inactive work item of **pwq** if available and allowed by max_active limit.h](h)Activate the first inactive work item of }(hj/khhhNhNubj)}(h**pwq**h]hpwq}(hj7khhhNhNubah}(h]h ]h"]h$]h&]uh1jhj/kubh. if available and allowed by max_active limit.}(hj/khhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjjubh)}(hReturns ``true`` if an inactive work item has been activated. 
``false`` if no inactive work item is found or max_active limit is reached.h](hReturns }(hjPkhhhNhNubj)}(h``true``h]htrue}(hjXkhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjPkubh. if an inactive work item has been activated. }(hjPkhhhNhNubj)}(h ``false``h]hfalse}(hjjkhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjPkubhB if no inactive work item is found or max_active limit is reached.}(hjPkhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhunplug_oldest_pwq (C function)c.unplug_oldest_pwqhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h4void unplug_oldest_pwq (struct workqueue_struct *wq)h]jx)}(h3void unplug_oldest_pwq(struct workqueue_struct *wq)h](j4)}(hvoidh]hvoid}(hjkhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjkhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM ubj)}(h h]h }(hjkhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjkhhhjkhM ubj)}(hunplug_oldest_pwqh]j)}(hunplug_oldest_pwqh]hunplug_oldest_pwq}(hjkhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjkubah}(h]h ](jjeh"]h$]h&]jjuh1jhjkhhhjkhM ubj )}(h(struct workqueue_struct *wq)h]j)}(hstruct workqueue_struct *wqh](j~)}(hjh]hstruct}(hjkhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjkubj)}(h h]h }(hjkhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjkubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjkhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjkubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjlmodnameN classnameNjj)}j]j)}jjksbc.unplug_oldest_pwqasbuh1hhjkubj)}(h h]h }(hjlhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjkubj)}(hjah]h*}(hj,lhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjkubj)}(hwqh]hwq}(hj9lhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjkubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjkubah}(h]h ]h"]h$]h&]jjuh1j hjkhhhjkhM ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjkhhhjkhM ubah}(h]jkah ](jjeh"]h$]h&]jj)jhuh1jqhjkhM hjkhhubj)}(hhh]h)}(h unplug the oldest pool_workqueueh]h unplug the oldest pool_workqueue}(hjclhhhNhNubah}(h]h 
]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hj`lhhubah}(h]h ]h"]h$]h&]uh1jhjkhhhjkhM ubeh}(h]h ](jfunctioneh"]h$]h&]jjjj{ljj{ljjjuh1jlhhhjJhNhNubj)}(hX!**Parameters** ``struct workqueue_struct *wq`` workqueue_struct where its oldest pwq is to be unplugged **Description** This function should only be called for ordered workqueues where only the oldest pwq is unplugged, the others are plugged to suspend execution to ensure proper work item ordering:: dfl_pwq --------------+ [P] - plugged | v pwqs -> A -> B [P] -> C [P] (newest) | | | 1 3 5 | | | 2 4 6 When the oldest pwq is drained and removed, this function should be called to unplug the next oldest one to start its work item execution. Note that pwq's are linked into wq->pwqs with the oldest first, so the first one in the list is the oldest.h](h)}(h**Parameters**h]j)}(hjlh]h Parameters}(hjlhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjlubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjlubjS)}(hhh]jX)}(hY``struct workqueue_struct *wq`` workqueue_struct where its oldest pwq is to be unplugged h](j^)}(h``struct workqueue_struct *wq``h]j)}(hjlh]hstruct workqueue_struct *wq}(hjlhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjlubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjlubjw)}(hhh]h)}(h8workqueue_struct where its oldest pwq is to be unpluggedh]h8workqueue_struct where its oldest pwq is to be unplugged}(hjlhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjlhM hjlubah}(h]h ]h"]h$]h&]uh1jvhjlubeh}(h]h ]h"]h$]h&]uh1jWhjlhM hjlubah}(h]h ]h"]h$]h&]uh1jRhjlubh)}(h**Description**h]j)}(hjlh]h Description}(hjlhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjlubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjlubh)}(hThis function should only be called for ordered workqueues where only the oldest pwq is unplugged, the others are 
plugged to suspend execution to ensure proper work item ordering::h]hThis function should only be called for ordered workqueues where only the oldest pwq is unplugged, the others are plugged to suspend execution to ensure proper work item ordering:}(hjlhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjlubj)}(hdfl_pwq --------------+ [P] - plugged | v pwqs -> A -> B [P] -> C [P] (newest) | | | 1 3 5 | | | 2 4 6h]hdfl_pwq --------------+ [P] - plugged | v pwqs -> A -> B [P] -> C [P] (newest) | | | 1 3 5 | | | 2 4 6}hjmsbah}(h]h ]h"]h$]h&]jjuh1jhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjlubh)}(hWhen the oldest pwq is drained and removed, this function should be called to unplug the next oldest one to start its work item execution. Note that pwq's are linked into wq->pwqs with the oldest first, so the first one in the list is the oldest.h]hWhen the oldest pwq is drained and removed, this function should be called to unplug the next oldest one to start its work item execution. 
Note that pwq’s are linked into wq->pwqs with the oldest first, so the first one in the list is the oldest.}(hjmhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjlubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh&node_activate_pending_pwq (C function)c.node_activate_pending_pwqhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h_void node_activate_pending_pwq (struct wq_node_nr_active *nna, struct worker_pool *caller_pool)h]jx)}(h^void node_activate_pending_pwq(struct wq_node_nr_active *nna, struct worker_pool *caller_pool)h](j4)}(hvoidh]hvoid}(hjBmhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj>mhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM4ubj)}(h h]h }(hjQmhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj>mhhhjPmhM4ubj)}(hnode_activate_pending_pwqh]j)}(hnode_activate_pending_pwqh]hnode_activate_pending_pwq}(hjcmhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj_mubah}(h]h ](jjeh"]h$]h&]jjuh1jhj>mhhhjPmhM4ubj )}(h@(struct wq_node_nr_active *nna, struct worker_pool *caller_pool)h](j)}(hstruct wq_node_nr_active *nnah](j~)}(hjh]hstruct}(hjmhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj{mubj)}(h h]h }(hjmhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj{mubh)}(hhh]j)}(hwq_node_nr_activeh]hwq_node_nr_active}(hjmhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjmubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmmodnameN classnameNjj)}j]j)}jjemsbc.node_activate_pending_pwqasbuh1hhj{mubj)}(h h]h }(hjmhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj{mubj)}(hjah]h*}(hjmhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj{mubj)}(hnnah]hnna}(hjmhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj{mubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjwmubj)}(hstruct worker_pool *caller_poolh](j~)}(hjh]hstruct}(hjmhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjmubj)}(h h]h }(hjmhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjmubh)}(hhh]j)}(h worker_poolh]h worker_pool}(hjnhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj nubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjnmodnameN 
classnameNjj)}j]jmc.node_activate_pending_pwqasbuh1hhjmubj)}(h h]h }(hj-nhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjmubj)}(hjah]h*}(hj;nhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjmubj)}(h caller_poolh]h caller_pool}(hjHnhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjmubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjwmubeh}(h]h ]h"]h$]h&]jjuh1j hj>mhhhjPmhM4ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhj:mhhhjPmhM4ubah}(h]j5mah ](jjeh"]h$]h&]jj)jhuh1jqhjPmhM4hj7mhhubj)}(hhh]h)}(h-Activate a pending pwq on a wq_node_nr_activeh]h-Activate a pending pwq on a wq_node_nr_active}(hjrnhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM4hjonhhubah}(h]h ]h"]h$]h&]uh1jhj7mhhhjPmhM4ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjnjjnjjjuh1jlhhhjJhNhNubj)}(hXT**Parameters** ``struct wq_node_nr_active *nna`` wq_node_nr_active to activate a pending pwq for ``struct worker_pool *caller_pool`` worker_pool the caller is locking **Description** Activate a pwq in **nna->pending_pwqs**. Called with **caller_pool** locked. 
**caller_pool** may be unlocked and relocked to lock other worker_pools.h](h)}(h**Parameters**h]j)}(hjnh]h Parameters}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjnubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM8hjnubjS)}(hhh](jX)}(hR``struct wq_node_nr_active *nna`` wq_node_nr_active to activate a pending pwq for h](j^)}(h!``struct wq_node_nr_active *nna``h]j)}(hjnh]hstruct wq_node_nr_active *nna}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjnubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM5hjnubjw)}(hhh]h)}(h/wq_node_nr_active to activate a pending pwq forh]h/wq_node_nr_active to activate a pending pwq for}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjnhM5hjnubah}(h]h ]h"]h$]h&]uh1jvhjnubeh}(h]h ]h"]h$]h&]uh1jWhjnhM5hjnubjX)}(hF``struct worker_pool *caller_pool`` worker_pool the caller is locking h](j^)}(h#``struct worker_pool *caller_pool``h]j)}(hjnh]hstruct worker_pool *caller_pool}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjnubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM6hjnubjw)}(hhh]h)}(h!worker_pool the caller is lockingh]h!worker_pool the caller is locking}(hjohhhNhNubah}(h]h ]h"]h$]h&]uh1hhjohM6hjoubah}(h]h ]h"]h$]h&]uh1jvhjnubeh}(h]h ]h"]h$]h&]uh1jWhjohM6hjnubeh}(h]h ]h"]h$]h&]uh1jRhjnubh)}(h**Description**h]j)}(hj'oh]h Description}(hj)ohhhNhNubah}(h]h ]h"]h$]h&]uh1jhj%oubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM8hjnubh)}(hActivate a pwq in **nna->pending_pwqs**. Called with **caller_pool** locked. **caller_pool** may be unlocked and relocked to lock other worker_pools.h](hActivate a pwq in }(hj=ohhhNhNubj)}(h**nna->pending_pwqs**h]hnna->pending_pwqs}(hjEohhhNhNubah}(h]h ]h"]h$]h&]uh1jhj=oubh. Called with }(hj=ohhhNhNubj)}(h**caller_pool**h]h caller_pool}(hjWohhhNhNubah}(h]h ]h"]h$]h&]uh1jhj=oubh locked. 
}(hj=ohhhNhNubj)}(h**caller_pool**h]h caller_pool}(hjiohhhNhNubah}(h]h ]h"]h$]h&]uh1jhj=oubh9 may be unlocked and relocked to lock other worker_pools.}(hj=ohhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM7hjnubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhpwq_dec_nr_active (C function)c.pwq_dec_nr_activehNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h3void pwq_dec_nr_active (struct pool_workqueue *pwq)h]jx)}(h2void pwq_dec_nr_active(struct pool_workqueue *pwq)h](j4)}(hvoidh]hvoid}(hjohhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjohhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjohhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjohhhjohMubj)}(hpwq_dec_nr_activeh]j)}(hpwq_dec_nr_activeh]hpwq_dec_nr_active}(hjohhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjoubah}(h]h ](jjeh"]h$]h&]jjuh1jhjohhhjohMubj )}(h(struct pool_workqueue *pwq)h]j)}(hstruct pool_workqueue *pwqh](j~)}(hjh]hstruct}(hjohhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjoubj)}(h h]h }(hjohhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjoubh)}(hhh]j)}(hpool_workqueueh]hpool_workqueue}(hjohhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjoubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjomodnameN classnameNjj)}j]j)}jjosbc.pwq_dec_nr_activeasbuh1hhjoubj)}(h h]h }(hjphhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjoubj)}(hjah]h*}(hj+phhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjoubj)}(hpwqh]hpwq}(hj8phhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjoubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjoubah}(h]h ]h"]h$]h&]jjuh1j hjohhhjohMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjohhhjohMubah}(h]joah ](jjeh"]h$]h&]jj)jhuh1jqhjohMhjohhubj)}(hhh]h)}(hRetire an active counth]hRetire an active count}(hjbphhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj_phhubah}(h]h ]h"]h$]h&]uh1jhjohhhjohMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjzpjjzpjjjuh1jlhhhjJhNhNubj)}(h**Parameters** ``struct pool_workqueue *pwq`` pool_workqueue of interest 
**Description** Decrement **pwq**'s nr_active and try to activate the first inactive work item. For unbound workqueues, this function may temporarily drop **pwq->pool->lock**.h](h)}(h**Parameters**h]j)}(hjph]h Parameters}(hjphhhNhNubah}(h]h ]h"]h$]h&]uh1jhjpubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj~pubjS)}(hhh]jX)}(h:``struct pool_workqueue *pwq`` pool_workqueue of interest h](j^)}(h``struct pool_workqueue *pwq``h]j)}(hjph]hstruct pool_workqueue *pwq}(hjphhhNhNubah}(h]h ]h"]h$]h&]uh1jhjpubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjpubjw)}(hhh]h)}(hpool_workqueue of interesth]hpool_workqueue of interest}(hjphhhNhNubah}(h]h ]h"]h$]h&]uh1hhjphMhjpubah}(h]h ]h"]h$]h&]uh1jvhjpubeh}(h]h ]h"]h$]h&]uh1jWhjphMhjpubah}(h]h ]h"]h$]h&]uh1jRhj~pubh)}(h**Description**h]j)}(hjph]h Description}(hjphhhNhNubah}(h]h ]h"]h$]h&]uh1jhjpubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj~pubh)}(hDecrement **pwq**'s nr_active and try to activate the first inactive work item. For unbound workqueues, this function may temporarily drop **pwq->pool->lock**.h](h Decrement }(hjphhhNhNubj)}(h**pwq**h]hpwq}(hjphhhNhNubah}(h]h ]h"]h$]h&]uh1jhjpubh|’s nr_active and try to activate the first inactive work item. 
For unbound workqueues, this function may temporarily drop }(hjphhhNhNubj)}(h**pwq->pool->lock**h]hpwq->pool->lock}(hjqhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjpubh.}(hjphhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj~pubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh!pwq_dec_nr_in_flight (C function)c.pwq_dec_nr_in_flighthNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hOvoid pwq_dec_nr_in_flight (struct pool_workqueue *pwq, unsigned long work_data)h]jx)}(hNvoid pwq_dec_nr_in_flight(struct pool_workqueue *pwq, unsigned long work_data)h](j4)}(hvoidh]hvoid}(hjGqhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjCqhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjVqhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjCqhhhjUqhMubj)}(hpwq_dec_nr_in_flighth]j)}(hpwq_dec_nr_in_flighth]hpwq_dec_nr_in_flight}(hjhqhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjdqubah}(h]h ](jjeh"]h$]h&]jjuh1jhjCqhhhjUqhMubj )}(h5(struct pool_workqueue *pwq, unsigned long work_data)h](j)}(hstruct pool_workqueue *pwqh](j~)}(hjh]hstruct}(hjqhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjqubj)}(h h]h }(hjqhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjqubh)}(hhh]j)}(hpool_workqueueh]hpool_workqueue}(hjqhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjqubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjqmodnameN classnameNjj)}j]j)}jjjqsbc.pwq_dec_nr_in_flightasbuh1hhjqubj)}(h h]h }(hjqhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjqubj)}(hjah]h*}(hjqhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjqubj)}(hpwqh]hpwq}(hjqhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjqubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj|qubj)}(hunsigned long work_datah](j4)}(hunsignedh]hunsigned}(hjqhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjqubj)}(h h]h }(hjrhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjqubj4)}(hlongh]hlong}(hjrhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjqubj)}(h h]h }(hj rhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjqubj)}(h work_datah]h work_data}(hj.rhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjqubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj|qubeh}(h]h 
]h"]h$]h&]jjuh1j hjCqhhhjUqhMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhj?qhhhjUqhMubah}(h]j:qah ](jjeh"]h$]h&]jj)jhuh1jqhjUqhMhjpool->lock** and thus should be called after all other state updates for the in-flight work item is complete. **Context** raw_spin_lock_irq(pool->lock).h](h)}(h**Parameters**h]j)}(hjzrh]h Parameters}(hj|rhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjxrubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjtrubjS)}(hhh](jX)}(h/``struct pool_workqueue *pwq`` pwq of interest h](j^)}(h``struct pool_workqueue *pwq``h]j)}(hjrh]hstruct pool_workqueue *pwq}(hjrhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjrubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjrubjw)}(hhh]h)}(hpwq of interesth]hpwq of interest}(hjrhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjrhMhjrubah}(h]h ]h"]h$]h&]uh1jvhjrubeh}(h]h ]h"]h$]h&]uh1jWhjrhMhjrubjX)}(hC``unsigned long work_data`` work_data of work which left the queue h](j^)}(h``unsigned long work_data``h]j)}(hjrh]hunsigned long work_data}(hjrhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjrubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjrubjw)}(hhh]h)}(h&work_data of work which left the queueh]h&work_data of work which left the queue}(hjrhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjrhMhjrubah}(h]h ]h"]h$]h&]uh1jvhjrubeh}(h]h ]h"]h$]h&]uh1jWhjrhMhjrubeh}(h]h ]h"]h$]h&]uh1jRhjtrubh)}(h**Description**h]j)}(hj sh]h Description}(hjshhhNhNubah}(h]h ]h"]h$]h&]uh1jhj subah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjtrubh)}(h~A work either has completed or is removed from pending queue, decrement nr_in_flight of its pwq and handle workqueue flushing.h]h~A work either has completed or is removed from pending queue, decrement nr_in_flight of its pwq and handle workqueue flushing.}(hj#shhhNhNubah}(h]h 
]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjtrubh)}(h**NOTE**h]j)}(hj4sh]hNOTE}(hj6shhhNhNubah}(h]h ]h"]h$]h&]uh1jhj2subah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjtrubh)}(hFor unbound workqueues, this function may temporarily drop **pwq->pool->lock** and thus should be called after all other state updates for the in-flight work item is complete.h](h;For unbound workqueues, this function may temporarily drop }(hjJshhhNhNubj)}(h**pwq->pool->lock**h]hpwq->pool->lock}(hjRshhhNhNubah}(h]h ]h"]h$]h&]uh1jhjJsubha and thus should be called after all other state updates for the in-flight work item is complete.}(hjJshhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjtrubh)}(h **Context**h]j)}(hjmsh]hContext}(hjoshhhNhNubah}(h]h ]h"]h$]h&]uh1jhjksubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjtrubh)}(hraw_spin_lock_irq(pool->lock).h]hraw_spin_lock_irq(pool->lock).}(hjshhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjtrubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh try_to_grab_pending (C function)c.try_to_grab_pendinghNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hXint try_to_grab_pending (struct work_struct *work, u32 cflags, unsigned long *irq_flags)h]jx)}(hWint try_to_grab_pending(struct work_struct *work, u32 cflags, unsigned long *irq_flags)h](j4)}(hinth]hint}(hjshhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjshhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjshhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjshhhjshMubj)}(htry_to_grab_pendingh]j)}(htry_to_grab_pendingh]htry_to_grab_pending}(hjshhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjsubah}(h]h ](jjeh"]h$]h&]jjuh1jhjshhhjshMubj 
)}(h@(struct work_struct *work, u32 cflags, unsigned long *irq_flags)h](j)}(hstruct work_struct *workh](j~)}(hjh]hstruct}(hjshhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjsubj)}(h h]h }(hjshhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjsubh)}(hhh]j)}(h work_structh]h work_struct}(hj thhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj tubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjtmodnameN classnameNjj)}j]j)}jjssbc.try_to_grab_pendingasbuh1hhjsubj)}(h h]h }(hj-thhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjsubj)}(hjah]h*}(hj;thhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjsubj)}(hworkh]hwork}(hjHthhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjsubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjsubj)}(h u32 cflagsh](h)}(hhh]j)}(hu32h]hu32}(hjdthhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjatubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjftmodnameN classnameNjj)}j]j)tc.try_to_grab_pendingasbuh1hhj]tubj)}(h h]h }(hjthhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj]tubj)}(hcflagsh]hcflags}(hjthhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj]tubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjsubj)}(hunsigned long *irq_flagsh](j4)}(hunsignedh]hunsigned}(hjthhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjtubj)}(h h]h }(hjthhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjtubj4)}(hlongh]hlong}(hjthhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjtubj)}(h h]h }(hjthhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjtubj)}(hjah]h*}(hjthhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjtubj)}(h irq_flagsh]h irq_flags}(hjthhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjtubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjsubeh}(h]h ]h"]h$]h&]jjuh1j hjshhhjshMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjshhhjshMubah}(h]jsah ](jjeh"]h$]h&]jj)jhuh1jqhjshMhjshhubj)}(hhh]h)}(h-steal work item from worklist and disable irqh]h-steal work item from worklist and disable irq}(hjuhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjuhhubah}(h]h ]h"]h$]h&]uh1jhjshhhjshMubeh}(h]h ](jfunctioneh"]h$]h&]jjjj0ujj0ujjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``struct work_struct *work`` work item to steal ``u32 cflags`` ``WORK_CANCEL_`` flags ``unsigned long *irq_flags`` place to store irq 
state **Description** Try to grab PENDING bit of **work**. This function can handle **work** in any stable state - idle, on timer or on worklist. ======== ================================================================ 1 if **work** was pending and we successfully stole PENDING 0 if **work** was idle and we claimed PENDING -EAGAIN if PENDING couldn't be grabbed at the moment, safe to busy-retry ======== ================================================================ **Note** On >= 0 return, the caller owns **work**'s PENDING bit. To avoid getting interrupted while holding PENDING and **work** off queue, irq must be disabled on entry. This, combined with delayed_work->timer being irqsafe, ensures that we return -EAGAIN for finite short period of time. On successful return, >= 0, irq is disabled and the caller is responsible for releasing it using local_irq_restore(***irq_flags**). This function is safe to call from any context including IRQ handler.h](h)}(h**Parameters**h]j)}(hj:uh]h Parameters}(hj= 0 return, the caller owns **work**'s PENDING bit. To avoid getting interrupted while holding PENDING and **work** off queue, irq must be disabled on entry. This, combined with delayed_work->timer being irqsafe, ensures that we return -EAGAIN for finite short period of time.h](h On >= 0 return, the caller owns }(hj|whhhNhNubj)}(h**work**h]hwork}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj|wubhJ’s PENDING bit. To avoid getting interrupted while holding PENDING and }(hj|whhhNhNubj)}(h**work**h]hwork}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj|wubh off queue, irq must be disabled on entry. 
This, combined with delayed_work->timer being irqsafe, ensures that we return -EAGAIN for finite short period of time.}(hj|whhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj4uubh)}(hOn successful return, >= 0, irq is disabled and the caller is responsible for releasing it using local_irq_restore(***irq_flags**).h](hsOn successful return, >= 0, irq is disabled and the caller is responsible for releasing it using local_irq_restore(}(hjwhhhNhNubj)}(h***irq_flags**h]h *irq_flags}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjwubh).}(hjwhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj4uubh)}(hEThis function is safe to call from any context including IRQ handler.h]hEThis function is safe to call from any context including IRQ handler.}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj4uubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhwork_grab_pending (C function)c.work_grab_pendinghNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hWbool work_grab_pending (struct work_struct *work, u32 cflags, unsigned long *irq_flags)h]jx)}(hVbool work_grab_pending(struct work_struct *work, u32 cflags, unsigned long *irq_flags)h](j4)}(hj&h]hbool}(hjwhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjwhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMZubj)}(h h]h }(hj xhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjwhhhj xhMZubj)}(hwork_grab_pendingh]j)}(hwork_grab_pendingh]hwork_grab_pending}(hjxhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjxubah}(h]h ](jjeh"]h$]h&]jjuh1jhjwhhhj xhMZubj )}(h@(struct work_struct *work, u32 cflags, unsigned long *irq_flags)h](j)}(hstruct work_struct *workh](j~)}(hjh]hstruct}(hj;xhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj7xubj)}(h h]h }(hjHxhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj7xubh)}(hhh]j)}(h work_structh]h 
work_struct}(hjYxhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjVxubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj[xmodnameN classnameNjj)}j]j)}jj!xsbc.work_grab_pendingasbuh1hhj7xubj)}(h h]h }(hjyxhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj7xubj)}(hjah]h*}(hjxhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj7xubj)}(hworkh]hwork}(hjxhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj7xubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj3xubj)}(h u32 cflagsh](h)}(hhh]j)}(hu32h]hu32}(hjxhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjxubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjxmodnameN classnameNjj)}j]juxc.work_grab_pendingasbuh1hhjxubj)}(h h]h }(hjxhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjxubj)}(hcflagsh]hcflags}(hjxhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjxubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj3xubj)}(hunsigned long *irq_flagsh](j4)}(hunsignedh]hunsigned}(hjxhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjxubj)}(h h]h }(hjyhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjxubj4)}(hlongh]hlong}(hjyhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjxubj)}(h h]h }(hjyhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjxubj)}(hjah]h*}(hj-yhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjxubj)}(h irq_flagsh]h irq_flags}(hj:yhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjxubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj3xubeh}(h]h ]h"]h$]h&]jjuh1j hjwhhhj xhMZubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjwhhhj xhMZubah}(h]jwah ](jjeh"]h$]h&]jj)jhuh1jqhj xhMZhjwhhubj)}(hhh]h)}(h-steal work item from worklist and disable irqh]h-steal work item from worklist and disable irq}(hjdyhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMZhjayhhubah}(h]h ]h"]h$]h&]uh1jhjwhhhj xhMZubeh}(h]h ](jfunctioneh"]h$]h&]jjjj|yjj|yjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``struct work_struct *work`` work item to steal ``u32 cflags`` ``WORK_CANCEL_`` flags ``unsigned long *irq_flags`` place to store IRQ state **Description** Grab PENDING bit of **work**. **work** can be in any stable state - idle, on timer or on worklist. Can be called from any context. IRQ is disabled on return with IRQ state stored in ***irq_flags**. 
The caller is responsible for re-enabling it using local_irq_restore(). Returns ``true`` if **work** was pending. ``false`` if idle.h](h)}(h**Parameters**h]j)}(hjyh]h Parameters}(hjyhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjyubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM^hjyubjS)}(hhh](jX)}(h0``struct work_struct *work`` work item to steal h](j^)}(h``struct work_struct *work``h]j)}(hjyh]hstruct work_struct *work}(hjyhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjyubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM[hjyubjw)}(hhh]h)}(hwork item to stealh]hwork item to steal}(hjyhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjyhM[hjyubah}(h]h ]h"]h$]h&]uh1jvhjyubeh}(h]h ]h"]h$]h&]uh1jWhjyhM[hjyubjX)}(h&``u32 cflags`` ``WORK_CANCEL_`` flags h](j^)}(h``u32 cflags``h]j)}(hjyh]h u32 cflags}(hjyhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjyubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM\hjyubjw)}(hhh]h)}(h``WORK_CANCEL_`` flagsh](j)}(h``WORK_CANCEL_``h]h WORK_CANCEL_}(hjyhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjyubh flags}(hjyhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhjyhM\hjyubah}(h]h ]h"]h$]h&]uh1jvhjyubeh}(h]h ]h"]h$]h&]uh1jWhjyhM\hjyubjX)}(h6``unsigned long *irq_flags`` place to store IRQ state h](j^)}(h``unsigned long *irq_flags``h]j)}(hj%zh]hunsigned long *irq_flags}(hj'zhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj#zubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM]hjzubjw)}(hhh]h)}(hplace to store IRQ stateh]hplace to store IRQ state}(hj>zhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj:zhM]hj;zubah}(h]h ]h"]h$]h&]uh1jvhjzubeh}(h]h ]h"]h$]h&]uh1jWhj:zhM]hjyubeh}(h]h ]h"]h$]h&]uh1jRhjyubh)}(h**Description**h]j)}(hj`zh]h Description}(hjbzhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj^zubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM_hjyubh)}(hbGrab PENDING 
bit of **work**. **work** can be in any stable state - idle, on timer or on worklist.h](hGrab PENDING bit of }(hjvzhhhNhNubj)}(h**work**h]hwork}(hj~zhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjvzubh. }(hjvzhhhNhNubj)}(h**work**h]hwork}(hjzhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjvzubh< can be in any stable state - idle, on timer or on worklist.}(hjvzhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM^hjyubh)}(hCan be called from any context. IRQ is disabled on return with IRQ state stored in ***irq_flags**. The caller is responsible for re-enabling it using local_irq_restore().h](hSCan be called from any context. IRQ is disabled on return with IRQ state stored in }(hjzhhhNhNubj)}(h***irq_flags**h]h *irq_flags}(hjzhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjzubhI. The caller is responsible for re-enabling it using local_irq_restore().}(hjzhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMahjyubh)}(h{hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj+{hhhj={hMvubj)}(h insert_workh]j)}(h insert_workh]h insert_work}(hjP{hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjL{ubah}(h]h ](jjeh"]h$]h&]jjuh1jhj+{hhhj={hMvubj )}(hh(struct pool_workqueue *pwq, struct work_struct *work, struct list_head *head, unsigned int extra_flags)h](j)}(hstruct pool_workqueue *pwqh](j~)}(hjh]hstruct}(hjl{hhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjh{ubj)}(h h]h }(hjy{hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjh{ubh)}(hhh]j)}(hpool_workqueueh]hpool_workqueue}(hj{hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj{ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj{modnameN classnameNjj)}j]j)}jjR{sb c.insert_workasbuh1hhjh{ubj)}(h h]h }(hj{hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjh{ubj)}(hjah]h*}(hj{hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjh{ubj)}(hpwqh]hpwq}(hj{hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjh{ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjd{ubj)}(hstruct work_struct *workh](j~)}(hjh]hstruct}(hj{hhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj{ubj)}(h h]h }(hj{hhhNhNubah}(h]h 
]jah"]h$]h&]uh1jhj{ubh)}(hhh]j)}(h work_structh]h work_struct}(hj{hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj{ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj{modnameN classnameNjj)}j]j{ c.insert_workasbuh1hhj{ubj)}(h h]h }(hj|hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj{ubj)}(hjah]h*}(hj(|hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj{ubj)}(hworkh]hwork}(hj5|hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj{ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjd{ubj)}(hstruct list_head *headh](j~)}(hjh]hstruct}(hjN|hhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjJ|ubj)}(h h]h }(hj[|hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjJ|ubh)}(hhh]j)}(h list_headh]h list_head}(hjl|hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhji|ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjn|modnameN classnameNjj)}j]j{ c.insert_workasbuh1hhjJ|ubj)}(h h]h }(hj|hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjJ|ubj)}(hjah]h*}(hj|hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjJ|ubj)}(hheadh]hhead}(hj|hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjJ|ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjd{ubj)}(hunsigned int extra_flagsh](j4)}(hunsignedh]hunsigned}(hj|hhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj|ubj)}(h h]h }(hj|hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj|ubj4)}(hinth]hint}(hj|hhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj|ubj)}(h h]h }(hj|hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj|ubj)}(h extra_flagsh]h extra_flags}(hj|hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj|ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjd{ubeh}(h]h ]h"]h$]h&]jjuh1j hj+{hhhj={hMvubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhj'{hhhj={hMvubah}(h]j"{ah ](jjeh"]h$]h&]jj)jhuh1jqhj={hMvhj${hhubj)}(hhh]h)}(hinsert a work into a poolh]hinsert a work into a pool}(hj }hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMvhj}hhubah}(h]h ]h"]h$]h&]uh1jhj${hhhj={hMvubeh}(h]h ](jfunctioneh"]h$]h&]jjjj8}jj8}jjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``struct pool_workqueue *pwq`` pwq **work** belongs to ``struct work_struct *work`` work to insert ``struct list_head *head`` insertion point ``unsigned int extra_flags`` extra WORK_STRUCT_* flags to set **Description** Insert **work** 
which belongs to **pwq** after **head**. **extra_flags** is or'd to work_struct flags. **Context** raw_spin_lock_irq(pool->lock).h](h)}(h**Parameters**h]j)}(hjB}h]h Parameters}(hjD}hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj@}ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMzhj<}ubjS)}(hhh](jX)}(h7``struct pool_workqueue *pwq`` pwq **work** belongs to h](j^)}(h``struct pool_workqueue *pwq``h]j)}(hja}h]hstruct pool_workqueue *pwq}(hjc}hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj_}ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMwhj[}ubjw)}(hhh]h)}(hpwq **work** belongs toh](hpwq }(hjz}hhhNhNubj)}(h**work**h]hwork}(hj}hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjz}ubh belongs to}(hjz}hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhjv}hMwhjw}ubah}(h]h ]h"]h$]h&]uh1jvhj[}ubeh}(h]h ]h"]h$]h&]uh1jWhjv}hMwhjX}ubjX)}(h,``struct work_struct *work`` work to insert h](j^)}(h``struct work_struct *work``h]j)}(hj}h]hstruct work_struct *work}(hj}hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj}ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMxhj}ubjw)}(hhh]h)}(hwork to inserth]hwork to insert}(hj}hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj}hMxhj}ubah}(h]h ]h"]h$]h&]uh1jvhj}ubeh}(h]h ]h"]h$]h&]uh1jWhj}hMxhjX}ubjX)}(h+``struct list_head *head`` insertion point h](j^)}(h``struct list_head *head``h]j)}(hj}h]hstruct list_head *head}(hj}hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj}ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMyhj}ubjw)}(hhh]h)}(hinsertion pointh]hinsertion point}(hj}hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj}hMyhj}ubah}(h]h ]h"]h$]h&]uh1jvhj}ubeh}(h]h ]h"]h$]h&]uh1jWhj}hMyhjX}ubjX)}(h>``unsigned int extra_flags`` extra WORK_STRUCT_* flags to set h](j^)}(h``unsigned int extra_flags``h]j)}(hj~h]hunsigned int extra_flags}(hj ~hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj~ubah}(h]h 
]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMzhj~ubjw)}(hhh]h)}(h extra WORK_STRUCT_* flags to seth]h extra WORK_STRUCT_* flags to set}(hj7~hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj3~hMzhj4~ubah}(h]h ]h"]h$]h&]uh1jvhj~ubeh}(h]h ]h"]h$]h&]uh1jWhj3~hMzhjX}ubeh}(h]h ]h"]h$]h&]uh1jRhj<}ubh)}(h**Description**h]j)}(hjY~h]h Description}(hj[~hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjW~ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM|hj<}ubh)}(hgInsert **work** which belongs to **pwq** after **head**. **extra_flags** is or'd to work_struct flags.h](hInsert }(hjo~hhhNhNubj)}(h**work**h]hwork}(hjw~hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjo~ubh which belongs to }(hjo~hhhNhNubj)}(h**pwq**h]hpwq}(hj~hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjo~ubh after }(hjo~hhhNhNubj)}(h**head**h]hhead}(hj~hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjo~ubh. }(hjo~hhhNhNubj)}(h**extra_flags**h]h extra_flags}(hj~hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjo~ubh is or’d to work_struct flags.}(hjo~hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM{hj<}ubh)}(h **Context**h]j)}(hj~h]hContext}(hj~hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj~ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM~hj<}ubh)}(hraw_spin_lock_irq(pool->lock).h]hraw_spin_lock_irq(pool->lock).}(hj~hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj<}ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhqueue_work_on (C function)c.queue_work_onhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hSbool queue_work_on (int cpu, struct workqueue_struct *wq, struct work_struct *work)h]jx)}(hRbool queue_work_on(int cpu, struct workqueue_struct *wq, struct work_struct *work)h](j4)}(hj&h]hbool}(hj hhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj 
hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM? ubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj hhhjhM? ubj)}(h queue_work_onh]j)}(h queue_work_onh]h queue_work_on}(hj-hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj)ubah}(h]h ](jjeh"]h$]h&]jjuh1jhj hhhjhM? ubj )}(h@(int cpu, struct workqueue_struct *wq, struct work_struct *work)h](j)}(hint cpuh](j4)}(hinth]hint}(hjIhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjEubj)}(h h]h }(hjWhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjEubj)}(hcpuh]hcpu}(hjehhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjEubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjAubj)}(hstruct workqueue_struct *wqh](j~)}(hjh]hstruct}(hj~hhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjzubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjzubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jj/sbc.queue_work_onasbuh1hhjzubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjzubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjzubj)}(hwqh]hwq}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjzubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjAubj)}(hstruct work_struct *workh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(h work_structh]h work_struct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]jc.queue_work_onasbuh1hhjubj)}(h h]h }(hj,hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hj:hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hworkh]hwork}(hjGhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjAubeh}(h]h ]h"]h$]h&]jjuh1j hj hhhjhM? ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhM? ubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhM? hjhhubj)}(hhh]h)}(hqueue work on specific cpuh]hqueue work on specific cpu}(hjqhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM? hjnhhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhM? 
ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``int cpu`` CPU number to execute work on ``struct workqueue_struct *wq`` workqueue to use ``struct work_struct *work`` work to queue **Description** We queue the work to a specific CPU, the caller must ensure it can't go away. Callers that fail to ensure that the specified CPU cannot go away will execute on a randomly chosen CPU. But note well that callers specifying a CPU that never has been online will get a splat. **Return** ``false`` if **work** was already on a queue, ``true`` otherwise.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMC hjubjS)}(hhh](jX)}(h*``int cpu`` CPU number to execute work on h](j^)}(h ``int cpu``h]j)}(hjh]hint cpu}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM@ hjubjw)}(hhh]h)}(hCPU number to execute work onh]hCPU number to execute work on}(hjˀhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjǀhM@ hjȀubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjǀhM@ hjubjX)}(h1``struct workqueue_struct *wq`` workqueue to use h](j^)}(h``struct workqueue_struct *wq``h]j)}(hjh]hstruct workqueue_struct *wq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMA hjubjw)}(hhh]h)}(hworkqueue to useh]hworkqueue to use}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMA hjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMA hjubjX)}(h+``struct work_struct *work`` work to queue h](j^)}(h``struct work_struct *work``h]j)}(hj$h]hstruct work_struct *work}(hj&hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj"ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMB hjubjw)}(hhh]h)}(h work to queueh]h work to 
queue}(hj=hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj9hMB hj:ubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhj9hMB hjubeh}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hj_h]h Description}(hjahhhNhNubah}(h]h ]h"]h$]h&]uh1jhj]ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMD hjubh)}(hXWe queue the work to a specific CPU, the caller must ensure it can't go away. Callers that fail to ensure that the specified CPU cannot go away will execute on a randomly chosen CPU. But note well that callers specifying a CPU that never has been online will get a splat.h]hXWe queue the work to a specific CPU, the caller must ensure it can’t go away. Callers that fail to ensure that the specified CPU cannot go away will execute on a randomly chosen CPU. But note well that callers specifying a CPU that never has been online will get a splat.}(hjuhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMC hjubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMI hjubh)}(hA``false`` if **work** was already on a queue, ``true`` otherwise.h](j)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if }(hjhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh was already on a queue, }(hjhhhNhNubj)}(h``true``h]htrue}(hjāhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh otherwise.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMJ hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh!select_numa_node_cpu (C function)c.select_numa_node_cpuhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h#int select_numa_node_cpu (int node)h]jx)}(h"int select_numa_node_cpu(int node)h](j4)}(hinth]hint}(hjhhhNhNubah}(h]h 
]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM` ubj)}(h h]h }(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhj hM` ubj)}(hselect_numa_node_cpuh]j)}(hselect_numa_node_cpuh]hselect_numa_node_cpu}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhj hM` ubj )}(h (int node)h]j)}(hint nodeh](j4)}(hinth]hint}(hj:hhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj6ubj)}(h h]h }(hjHhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj6ubj)}(hnodeh]hnode}(hjVhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj6ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj2ubah}(h]h ]h"]h$]h&]jjuh1j hjhhhj hM` ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhj hM` ubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhj hM` hjhhubj)}(hhh]h)}(hSelect a CPU based on NUMA nodeh]hSelect a CPU based on NUMA node}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM` hj}hhubah}(h]h ]h"]h$]h&]uh1jhjhhhj hM` ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hX\**Parameters** ``int node`` NUMA node ID that we want to select a CPU from **Description** This function will attempt to find a "random" cpu available on a given node. 
If there are no CPUs available on the given node it will return WORK_CPU_UNBOUND indicating that we should just schedule to any available CPU if we need to schedule this work.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMd hjubjS)}(hhh]jX)}(h<``int node`` NUMA node ID that we want to select a CPU from h](j^)}(h ``int node``h]j)}(hjh]hint node}(hjÂhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMa hjubjw)}(hhh]h)}(h.NUMA node ID that we want to select a CPU fromh]h.NUMA node ID that we want to select a CPU from}(hjڂhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjւhMa hjׂubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjւhMa hjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMc hjubh)}(hThis function will attempt to find a "random" cpu available on a given node. If there are no CPUs available on the given node it will return WORK_CPU_UNBOUND indicating that we should just schedule to any available CPU if we need to schedule this work.h]hXThis function will attempt to find a “random” cpu available on a given node. 
If there are no CPUs available on the given node it will return WORK_CPU_UNBOUND indicating that we should just schedule to any available CPU if we need to schedule this work.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMb hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhqueue_work_node (C function)c.queue_work_nodehNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hVbool queue_work_node (int node, struct workqueue_struct *wq, struct work_struct *work)h]jx)}(hUbool queue_work_node(int node, struct workqueue_struct *wq, struct work_struct *work)h](j4)}(hj&h]hbool}(hjAhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj=hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM} ubj)}(h h]h }(hjOhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj=hhhjNhM} ubj)}(hqueue_work_nodeh]j)}(hqueue_work_nodeh]hqueue_work_node}(hjahhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj]ubah}(h]h ](jjeh"]h$]h&]jjuh1jhj=hhhjNhM} ubj )}(hA(int node, struct workqueue_struct *wq, struct work_struct *work)h](j)}(hint nodeh](j4)}(hinth]hint}(hj}hhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjyubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjyubj)}(hnodeh]hnode}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjyubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjuubj)}(hstruct workqueue_struct *wqh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjЃhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj̓ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj҃modnameN classnameNjj)}j]j)}jjcsbc.queue_work_nodeasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hwqh]hwq}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjuubj)}(hstruct work_struct *workh](j~)}(hjh]hstruct}(hj$hhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj ubj)}(h h]h }(hj1hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj ubh)}(hhh]j)}(h 
work_structh]h work_struct}(hjBhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj?ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjDmodnameN classnameNjj)}j]jc.queue_work_nodeasbuh1hhj ubj)}(h h]h }(hj`hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj ubj)}(hjah]h*}(hjnhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj ubj)}(hworkh]hwork}(hj{hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjuubeh}(h]h ]h"]h$]h&]jjuh1j hj=hhhjNhM} ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhj9hhhjNhM} ubah}(h]j4ah ](jjeh"]h$]h&]jj)jhuh1jqhjNhM} hj6hhubj)}(hhh]h)}(h2queue work on a "random" cpu for a given NUMA nodeh]h6queue work on a “random” cpu for a given NUMA node}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM} hjhhubah}(h]h ]h"]h$]h&]uh1jhj6hhhjNhM} ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hXH**Parameters** ``int node`` NUMA node that we are targeting the work for ``struct workqueue_struct *wq`` workqueue to use ``struct work_struct *work`` work to queue **Description** We queue the work to a "random" CPU within a given NUMA node. The basic idea here is to provide a way to somehow associate work with a given NUMA node. This function will only make a best effort attempt at getting this onto the right NUMA node. If no node is requested or the requested node is offline then we just fall back to standard queue_work behavior. Currently the "random" CPU ends up being the first available CPU in the intersection of cpu_online_mask and the cpumask of the node, unless we are running on the node. In that case we just use the current CPU. 
**Return** ``false`` if **work** was already on a queue, ``true`` otherwise.h](h)}(h**Parameters**h]j)}(hjDŽh]h Parameters}(hjɄhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjńubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubjS)}(hhh](jX)}(h:``int node`` NUMA node that we are targeting the work for h](j^)}(h ``int node``h]j)}(hjh]hint node}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM~ hjubjw)}(hhh]h)}(h,NUMA node that we are targeting the work forh]h,NUMA node that we are targeting the work for}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM~ hjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhM~ hj݄ubjX)}(h1``struct workqueue_struct *wq`` workqueue to use h](j^)}(h``struct workqueue_struct *wq``h]j)}(hjh]hstruct workqueue_struct *wq}(hj!hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubjw)}(hhh]h)}(hworkqueue to useh]hworkqueue to use}(hj8hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj4hM hj5ubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhj4hM hj݄ubjX)}(h+``struct work_struct *work`` work to queue h](j^)}(h``struct work_struct *work``h]j)}(hjXh]hstruct work_struct *work}(hjZhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjVubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjRubjw)}(hhh]h)}(h work to queueh]h work to queue}(hjqhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjmhM hjnubah}(h]h ]h"]h$]h&]uh1jvhjRubeh}(h]h ]h"]h$]h&]uh1jWhjmhM hj݄ubeh}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubh)}(hWe queue the work to a "random" CPU within a given NUMA node. 
The basic idea here is to provide a way to somehow associate work with a given NUMA node.h]hWe queue the work to a “random” CPU within a given NUMA node. The basic idea here is to provide a way to somehow associate work with a given NUMA node.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubh)}(hThis function will only make a best effort attempt at getting this onto the right NUMA node. If no node is requested or the requested node is offline then we just fall back to standard queue_work behavior.h]hThis function will only make a best effort attempt at getting this onto the right NUMA node. If no node is requested or the requested node is offline then we just fall back to standard queue_work behavior.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubh)}(hCurrently the "random" CPU ends up being the first available CPU in the intersection of cpu_online_mask and the cpumask of the node, unless we are running on the node. In that case we just use the current CPU.h]hCurrently the “random” CPU ends up being the first available CPU in the intersection of cpu_online_mask and the cpumask of the node, unless we are running on the node. 
In that case we just use the current CPU.}(hjDžhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubh)}(h **Return**h]j)}(hj؅h]hReturn}(hjڅhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjօubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubh)}(hA``false`` if **work** was already on a queue, ``true`` otherwise.h](j)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if }(hjhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh was already on a queue, }(hjhhhNhNubj)}(h``true``h]htrue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh otherwise.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh"queue_delayed_work_on (C function)c.queue_delayed_work_onhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hrbool queue_delayed_work_on (int cpu, struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay)h]jx)}(hqbool queue_delayed_work_on(int cpu, struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay)h](j4)}(hj&h]hbool}(hjOhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjKhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM ubj)}(h h]h }(hj]hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjKhhhj\hM ubj)}(hqueue_delayed_work_onh]j)}(hqueue_delayed_work_onh]hqueue_delayed_work_on}(hjohhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjkubah}(h]h ](jjeh"]h$]h&]jjuh1jhjKhhhj\hM ubj )}(hW(int cpu, struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay)h](j)}(hint cpuh](j4)}(hinth]hint}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hcpuh]hcpu}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct workqueue_struct *wqh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h 
]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hj͆hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjކhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjۆubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jjqsbc.queue_delayed_work_onasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hwqh]hwq}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct delayed_work *dworkh](j~)}(hjh]hstruct}(hj2hhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj.ubj)}(h h]h }(hj?hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj.ubh)}(hhh]j)}(h delayed_workh]h delayed_work}(hjPhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjMubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjRmodnameN classnameNjj)}j]jc.queue_delayed_work_onasbuh1hhj.ubj)}(h h]h }(hjnhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj.ubj)}(hjah]h*}(hj|hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj.ubj)}(hdworkh]hdwork}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj.ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hunsigned long delayh](j4)}(hunsignedh]hunsigned}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj4)}(hlongh]hlong}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjubj)}(h h]h }(hj̇hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hdelayh]hdelay}(hjڇhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1j hjKhhhj\hM ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjGhhhj\hM ubah}(h]jBah ](jjeh"]h$]h&]jj)jhuh1jqhj\hM hjDhhubj)}(hhh]h)}(h&queue work on specific CPU after delayh]h&queue work on specific CPU after delay}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjhhubah}(h]h ]h"]h$]h&]uh1jhjDhhhj\hM ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``int cpu`` CPU number to execute work on ``struct workqueue_struct *wq`` workqueue to use ``struct delayed_work *dwork`` work to queue ``unsigned long delay`` number of jiffies to wait 
before queueing **Description** We queue the delayed_work to a specific CPU, for non-zero delays the caller must ensure it is online and can't go away. Callers that fail to ensure this, may get **dwork->timer** queued to an offlined CPU and this will prevent queueing of **dwork->work** unless the offlined CPU becomes online again. **Return** ``false`` if **work** was already on a queue, ``true`` otherwise. If **delay** is zero and **dwork** is idle, it will be scheduled for immediate execution.h](h)}(h**Parameters**h]j)}(hj&h]h Parameters}(hj(hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj$ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hj ubjS)}(hhh](jX)}(h*``int cpu`` CPU number to execute work on h](j^)}(h ``int cpu``h]j)}(hjEh]hint cpu}(hjGhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjCubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hj?ubjw)}(hhh]h)}(hCPU number to execute work onh]hCPU number to execute work on}(hj^hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjZhM hj[ubah}(h]h ]h"]h$]h&]uh1jvhj?ubeh}(h]h ]h"]h$]h&]uh1jWhjZhM hj<ubjX)}(h1``struct workqueue_struct *wq`` workqueue to use h](j^)}(h``struct workqueue_struct *wq``h]j)}(hj~h]hstruct workqueue_struct *wq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj|ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjxubjw)}(hhh]h)}(hworkqueue to useh]hworkqueue to use}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM hjubah}(h]h ]h"]h$]h&]uh1jvhjxubeh}(h]h ]h"]h$]h&]uh1jWhjhM hj<ubjX)}(h-``struct delayed_work *dwork`` work to queue h](j^)}(h``struct delayed_work *dwork``h]j)}(hjh]hstruct delayed_work *dwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubjw)}(hhh]h)}(h work to queueh]h work to queue}(hjЈhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj̈hM hj͈ubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h 
]h"]h$]h&]uh1jWhj̈hM hj<ubjX)}(hB``unsigned long delay`` number of jiffies to wait before queueing h](j^)}(h``unsigned long delay``h]j)}(hjh]hunsigned long delay}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubjw)}(hhh]h)}(h)number of jiffies to wait before queueingh]h)number of jiffies to wait before queueing}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM hjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhM hj<ubeh}(h]h ]h"]h$]h&]uh1jRhj ubh)}(h**Description**h]j)}(hj+h]h Description}(hj-hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj)ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hj ubh)}(hX,We queue the delayed_work to a specific CPU, for non-zero delays the caller must ensure it is online and can't go away. Callers that fail to ensure this, may get **dwork->timer** queued to an offlined CPU and this will prevent queueing of **dwork->work** unless the offlined CPU becomes online again.h](hWe queue the delayed_work to a specific CPU, for non-zero delays the caller must ensure it is online and can’t go away. Callers that fail to ensure this, may get }(hjAhhhNhNubj)}(h**dwork->timer**h]h dwork->timer}(hjIhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjAubh= queued to an offlined CPU and this will prevent queueing of }(hjAhhhNhNubj)}(h**dwork->work**h]h dwork->work}(hj[hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjAubh. unless the offlined CPU becomes online again.}(hjAhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hj ubh)}(h **Return**h]j)}(hjvh]hReturn}(hjxhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjtubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hj ubh)}(h``false`` if **work** was already on a queue, ``true`` otherwise. 
If **delay** is zero and **dwork** is idle, it will be scheduled for immediate execution.h](j)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if }(hjhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh was already on a queue, }(hjhhhNhNubj)}(h``true``h]htrue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh otherwise. If }(hjhhhNhNubj)}(h **delay**h]hdelay}(hjƉhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh is zero and }(hjhhhNhNubj)}(h **dwork**h]hdwork}(hj؉hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh7 is idle, it will be scheduled for immediate execution.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hj ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh mod_delayed_work_on (C function)c.mod_delayed_work_onhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hpbool mod_delayed_work_on (int cpu, struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay)h]jx)}(hobool mod_delayed_work_on(int cpu, struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay)h](j4)}(hj&h]hbool}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM ubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj hhhjhM ubj)}(hmod_delayed_work_onh]j)}(hmod_delayed_work_onh]hmod_delayed_work_on}(hj1hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj-ubah}(h]h ](jjeh"]h$]h&]jjuh1jhj hhhjhM ubj )}(hW(int cpu, struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay)h](j)}(hint cpuh](j4)}(hinth]hint}(hjMhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjIubj)}(h h]h }(hj[hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjIubj)}(hcpuh]hcpu}(hjihhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjIubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjEubj)}(hstruct workqueue_struct *wqh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj~ubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj~ubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjhhhNhNubah}(h]h 
]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jj3sbc.mod_delayed_work_onasbuh1hhj~ubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj~ubj)}(hjah]h*}(hjΊhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj~ubj)}(hwqh]hwq}(hjۊhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj~ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjEubj)}(hstruct delayed_work *dworkh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(h delayed_workh]h delayed_work}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]jc.mod_delayed_work_onasbuh1hhjubj)}(h h]h }(hj0hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hj>hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hdworkh]hdwork}(hjKhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjEubj)}(hunsigned long delayh](j4)}(hunsignedh]hunsigned}(hjdhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj`ubj)}(h h]h }(hjrhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj`ubj4)}(hlongh]hlong}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj`ubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj`ubj)}(hdelayh]hdelay}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj`ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjEubeh}(h]h ]h"]h$]h&]jjuh1j hj hhhjhM ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhj hhhjhM ubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhM hjhhubj)}(hhh]h)}(h7modify delay of or queue a delayed work on specific CPUh]h7modify delay of or queue a delayed work on specific CPU}(hjƋhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjËhhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhM ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjދjjދjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``int cpu`` CPU number to execute work on ``struct workqueue_struct *wq`` workqueue to use ``struct delayed_work *dwork`` work to queue ``unsigned long delay`` number of jiffies to wait before queueing **Description** If **dwork** is idle, equivalent to queue_delayed_work_on(); otherwise, modify 
**dwork**'s timer so that it expires after **delay**. If **delay** is zero, **work** is guaranteed to be scheduled immediately regardless of its current state. This function is safe to call from any context including IRQ handler. See try_to_grab_pending() for details. **Return** ``false`` if **dwork** was idle and queued, ``true`` if **dwork** was pending and its timer was modified.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubjS)}(hhh](jX)}(h*``int cpu`` CPU number to execute work on h](j^)}(h ``int cpu``h]j)}(hjh]hint cpu}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubjw)}(hhh]h)}(hCPU number to execute work onh]hCPU number to execute work on}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM hjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhM hjubjX)}(h1``struct workqueue_struct *wq`` workqueue to use h](j^)}(h``struct workqueue_struct *wq``h]j)}(hj@h]hstruct workqueue_struct *wq}(hjBhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj>ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hj:ubjw)}(hhh]h)}(hworkqueue to useh]hworkqueue to use}(hjYhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjUhM hjVubah}(h]h ]h"]h$]h&]uh1jvhj:ubeh}(h]h ]h"]h$]h&]uh1jWhjUhM hjubjX)}(h-``struct delayed_work *dwork`` work to queue h](j^)}(h``struct delayed_work *dwork``h]j)}(hjyh]hstruct delayed_work *dwork}(hj{hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjwubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjsubjw)}(hhh]h)}(h work to queueh]h work to queue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM hjubah}(h]h ]h"]h$]h&]uh1jvhjsubeh}(h]h ]h"]h$]h&]uh1jWhjhM hjubjX)}(hB``unsigned long delay`` number of jiffies to wait before queueing h](j^)}(h``unsigned 
long delay``h]j)}(hjh]hunsigned long delay}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubjw)}(hhh]h)}(h)number of jiffies to wait before queueingh]h)number of jiffies to wait before queueing}(hjˌhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjnjhM hjȌubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjnjhM hjubeh}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubh)}(hIf **dwork** is idle, equivalent to queue_delayed_work_on(); otherwise, modify **dwork**'s timer so that it expires after **delay**. If **delay** is zero, **work** is guaranteed to be scheduled immediately regardless of its current state.h](hIf }(hjhhhNhNubj)}(h **dwork**h]hdwork}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhC is idle, equivalent to queue_delayed_work_on(); otherwise, modify }(hjhhhNhNubj)}(h **dwork**h]hdwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh$’s timer so that it expires after }(hjhhhNhNubj)}(h **delay**h]hdelay}(hj/hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh. If }(hjhhhNhNubj)}(h **delay**h]hdelay}(hjAhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh is zero, }(hjhhhNhNubj)}(h**work**h]hwork}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhK is guaranteed to be scheduled immediately regardless of its current state.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubh)}(hlThis function is safe to call from any context including IRQ handler. See try_to_grab_pending() for details.h]hlThis function is safe to call from any context including IRQ handler. 
See try_to_grab_pending() for details.}(hjlhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubh)}(h **Return**h]j)}(hj}h]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj{ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubh)}(hi``false`` if **dwork** was idle and queued, ``true`` if **dwork** was pending and its timer was modified.h](j)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if }(hjhhhNhNubj)}(h **dwork**h]hdwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh was idle and queued, }(hjhhhNhNubj)}(h``true``h]htrue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if }hjsbj)}(h **dwork**h]hdwork}(hj͍hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh( was pending and its timer was modified.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhqueue_rcu_work (C function)c.queue_rcu_workhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hIbool queue_rcu_work (struct workqueue_struct *wq, struct rcu_work *rwork)h]jx)}(hHbool queue_rcu_work(struct workqueue_struct *wq, struct rcu_work *rwork)h](j4)}(hj&h]hbool}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM5 ubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhM5 ubj)}(hqueue_rcu_workh]j)}(hqueue_rcu_workh]hqueue_rcu_work}(hj&hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj"ubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhM5 ubj )}(h5(struct workqueue_struct *wq, struct rcu_work *rwork)h](j)}(hstruct workqueue_struct *wqh](j~)}(hjh]hstruct}(hjBhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj>ubj)}(h h]h }(hjOhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj>ubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hj`hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj]ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjbmodnameN 
classnameNjj)}j]j)}jj(sbc.queue_rcu_workasbuh1hhj>ubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj>ubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj>ubj)}(hwqh]hwq}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj>ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj:ubj)}(hstruct rcu_work *rworkh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(hrcu_workh]hrcu_work}(hjҎhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjώubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjԎmodnameN classnameNjj)}j]j|c.queue_rcu_workasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hrworkh]hrwork}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj:ubeh}(h]h ]h"]h$]h&]jjuh1j hjhhhjhM5 ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhM5 ubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhM5 hjhhubj)}(hhh]h)}(h#queue work after a RCU grace periodh]h#queue work after a RCU grace period}(hj5hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM5 hj2hhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhM5 ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjMjjMjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``struct workqueue_struct *wq`` workqueue to use ``struct rcu_work *rwork`` work to queue **Return** ``false`` if **rwork** was already pending, ``true`` otherwise. Note that a full RCU grace period is guaranteed only after a ``true`` return. 
While **rwork** is guaranteed to be executed after a ``false`` return, the execution may happen before a full RCU grace period has passed.h](h)}(h**Parameters**h]j)}(hjWh]h Parameters}(hjYhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjUubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM9 hjQubjS)}(hhh](jX)}(h1``struct workqueue_struct *wq`` workqueue to use h](j^)}(h``struct workqueue_struct *wq``h]j)}(hjvh]hstruct workqueue_struct *wq}(hjxhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjtubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM6 hjpubjw)}(hhh]h)}(hworkqueue to useh]hworkqueue to use}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM6 hjubah}(h]h ]h"]h$]h&]uh1jvhjpubeh}(h]h ]h"]h$]h&]uh1jWhjhM6 hjmubjX)}(h)``struct rcu_work *rwork`` work to queue h](j^)}(h``struct rcu_work *rwork``h]j)}(hjh]hstruct rcu_work *rwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM7 hjubjw)}(hhh]h)}(h work to queueh]h work to queue}(hjȏhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjďhM7 hjŏubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjďhM7 hjmubeh}(h]h ]h"]h$]h&]uh1jRhjQubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM9 hjQubh)}(hX``false`` if **rwork** was already pending, ``true`` otherwise. Note that a full RCU grace period is guaranteed only after a ``true`` return. While **rwork** is guaranteed to be executed after a ``false`` return, the execution may happen before a full RCU grace period has passed.h](j)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if }(hjhhhNhNubj)}(h **rwork**h]hrwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh was already pending, }(hjhhhNhNubj)}(h``true``h]htrue}(hj(hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhJ otherwise. 
Note that a full RCU grace period is guaranteed only after a }(hjhhhNhNubj)}(h``true``h]htrue}(hj:hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh return. While }(hjhhhNhNubj)}(h **rwork**h]hrwork}(hjLhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh& is guaranteed to be executed after a }(hjhhhNhNubj)}(h ``false``h]hfalse}(hj^hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhL return, the execution may happen before a full RCU grace period has passed.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM9 hjQubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh"worker_attach_to_pool (C function)c.worker_attach_to_poolhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hLvoid worker_attach_to_pool (struct worker *worker, struct worker_pool *pool)h]jx)}(hKvoid worker_attach_to_pool(struct worker *worker, struct worker_pool *pool)h](j4)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMi ubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhMi ubj)}(hworker_attach_to_poolh]j)}(hworker_attach_to_poolh]hworker_attach_to_pool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhMi ubj )}(h1(struct worker *worker, struct worker_pool *pool)h](j)}(hstruct worker *workerh](j~)}(hjh]hstruct}(hjԐhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjАubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjАubh)}(hhh]j)}(hworkerh]hworker}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jjsbc.worker_attach_to_poolasbuh1hhjАubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjАubj)}(hjah]h*}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjАubj)}(hworkerh]hworker}(hj-hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjАubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj̐ubj)}(hstruct worker_pool *poolh](j~)}(hjh]hstruct}(hjFhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjBubj)}(h h]h }(hjShhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjBubh)}(hhh]j)}(h 
worker_poolh]h worker_pool}(hjdhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjaubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjfmodnameN classnameNjj)}j]jc.worker_attach_to_poolasbuh1hhjBubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjBubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjBubj)}(hpoolh]hpool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjBubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj̐ubeh}(h]h ]h"]h$]h&]jjuh1j hjhhhjhMi ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhMi ubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhMi hjhhubj)}(hhh]h)}(hattach a worker to a poolh]hattach a worker to a pool}(hjǑhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMi hjđhhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhMi ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjߑjjߑjjjuh1jlhhhjJhNhNubj)}(hX(**Parameters** ``struct worker *worker`` worker to be attached ``struct worker_pool *pool`` the target pool **Description** Attach **worker** to **pool**. Once attached, the ``WORKER_UNBOUND`` flag and cpu-binding of **worker** are kept coordinated with the pool across cpu-[un]hotplugs.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMm hjubjS)}(hhh](jX)}(h0``struct worker *worker`` worker to be attached h](j^)}(h``struct worker *worker``h]j)}(hjh]hstruct worker *worker}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMj hjubjw)}(hhh]h)}(hworker to be attachedh]hworker to be attached}(hj!hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMj hjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMj hjubjX)}(h-``struct worker_pool *pool`` the target pool h](j^)}(h``struct worker_pool *pool``h]j)}(hjAh]hstruct worker_pool *pool}(hjChhhNhNubah}(h]h ]h"]h$]h&]uh1jhj?ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: 
./kernel/workqueue.chMk hj;ubjw)}(hhh]h)}(hthe target poolh]hthe target pool}(hjZhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjVhMk hjWubah}(h]h ]h"]h$]h&]uh1jvhj;ubeh}(h]h ]h"]h$]h&]uh1jWhjVhMk hjubeh}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hj|h]h Description}(hj~hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjzubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMm hjubh)}(hAttach **worker** to **pool**. Once attached, the ``WORKER_UNBOUND`` flag and cpu-binding of **worker** are kept coordinated with the pool across cpu-[un]hotplugs.h](hAttach }(hjhhhNhNubj)}(h **worker**h]hworker}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh to }(hjhhhNhNubj)}(h**pool**h]hpool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh. Once attached, the }(hjhhhNhNubj)}(h``WORKER_UNBOUND``h]hWORKER_UNBOUND}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh flag and cpu-binding of }(hjhhhNhNubj)}(h **worker**h]hworker}(hjВhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh< are kept coordinated with the pool across cpu-[un]hotplugs.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMl hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh$worker_detach_from_pool (C function)c.worker_detach_from_poolhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h4void worker_detach_from_pool (struct worker *worker)h]jx)}(h3void worker_detach_from_pool(struct worker *worker)h](j4)}(hvoidh]hvoid}(hj hhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM ubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhM ubj)}(hworker_detach_from_poolh]j)}(hworker_detach_from_poolh]hworker_detach_from_pool}(hj*hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj&ubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhM ubj )}(h(struct worker *worker)h]j)}(hstruct worker *workerh](j~)}(hjh]hstruct}(hjFhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjBubj)}(h h]h }(hjShhhNhNubah}(h]h 
]jah"]h$]h&]uh1jhjBubh)}(hhh]j)}(hworkerh]hworker}(hjdhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjaubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjfmodnameN classnameNjj)}j]j)}jj,sbc.worker_detach_from_poolasbuh1hhjBubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjBubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjBubj)}(hworkerh]hworker}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjBubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj>ubah}(h]h ]h"]h$]h&]jjuh1j hjhhhjhM ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhM ubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhM hjhhubj)}(hhh]h)}(hdetach a worker from its poolh]hdetach a worker from its pool}(hjɓhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjƓhhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhM ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``struct worker *worker`` worker which is attached to its pool **Description** Undo the attaching which had been done in worker_attach_to_pool(). The caller worker shouldn't access to the pool after detached except it has other reference to the pool.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubjS)}(hhh]jX)}(h?``struct worker *worker`` worker which is attached to its pool h](j^)}(h``struct worker *worker``h]j)}(hj h]hstruct worker *worker}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubjw)}(hhh]h)}(h$worker which is attached to its poolh]h$worker which is attached to its pool}(hj#hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM hj ubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhM hjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjEh]h Description}(hjGhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjCubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: 
./kernel/workqueue.chM hjubh)}(hUndo the attaching which had been done in worker_attach_to_pool(). The caller worker shouldn't access to the pool after detached except it has other reference to the pool.h]hUndo the attaching which had been done in worker_attach_to_pool(). The caller worker shouldn’t access to the pool after detached except it has other reference to the pool.}(hj[hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhcreate_worker (C function)c.create_workerhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h8struct worker * create_worker (struct worker_pool *pool)h]jx)}(h6struct worker *create_worker(struct worker_pool *pool)h](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM ubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhM ubh)}(hhh]j)}(hworkerh]hworker}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}j create_workersbc.create_workerasbuh1hhjhhhjhM ubj)}(h h]h }(hjʔhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhM ubj)}(hjah]h*}(hjؔhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhM ubj)}(h create_workerh]j)}(hjǔh]h create_worker}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhM ubj )}(h(struct worker_pool *pool)h]j)}(hstruct worker_pool *poolh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(h worker_poolh]h worker_pool}(hj"hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj$modnameN classnameNjj)}j]jŔc.create_workerasbuh1hhjubj)}(h h]h }(hj@hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjNhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hpoolh]hpool}(hj[hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h 
]h"]h$]h&]jjuh1j hjhhhjhM ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhM ubah}(h]j}ah ](jjeh"]h$]h&]jj)jhuh1jqhjhM hjhhubj)}(hhh]h)}(hcreate a new workqueue workerh]hcreate a new workqueue worker}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjhhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhM ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hX **Parameters** ``struct worker_pool *pool`` pool the new worker will belong to **Description** Create and start a new worker which is attached to **pool**. **Context** Might sleep. Does GFP_KERNEL allocations. **Return** Pointer to the newly created worker.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubjS)}(hhh]jX)}(h@``struct worker_pool *pool`` pool the new worker will belong to h](j^)}(h``struct worker_pool *pool``h]j)}(hjƕh]hstruct worker_pool *pool}(hjȕhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjĕubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubjw)}(hhh]h)}(h"pool the new worker will belong toh]h"pool the new worker will belong to}(hjߕhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjەhM hjܕubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjەhM hjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubh)}(hidle_list and into list **Description** Tag **worker** for destruction and adjust **pool** stats accordingly. The worker should be idle. 
**Context** raw_spin_lock_irq(pool->lock).h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM4 hjubjS)}(hhh](jX)}(h1``struct worker *worker`` worker to be destroyed h](j^)}(h``struct worker *worker``h]j)}(hjh]hstruct worker *worker}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM1 hjubjw)}(hhh]h)}(hworker to be destroyedh]hworker to be destroyed}(hj0hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj,hM1 hj-ubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhj,hM1 hjubjX)}(hW``struct list_head *list`` transfer worker away from its pool->idle_list and into list h](j^)}(h``struct list_head *list``h]j)}(hjPh]hstruct list_head *list}(hjRhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjNubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM2 hjJubjw)}(hhh]h)}(h;transfer worker away from its pool->idle_list and into listh]h;transfer worker away from its pool->idle_list and into list}(hjihhhNhNubah}(h]h ]h"]h$]h&]uh1hhjehM2 hjfubah}(h]h ]h"]h$]h&]uh1jvhjJubeh}(h]h ]h"]h$]h&]uh1jWhjehM2 hjubeh}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM4 hjubh)}(haTag **worker** for destruction and adjust **pool** stats accordingly. The worker should be idle.h](hTag }(hjhhhNhNubj)}(h **worker**h]hworker}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh for destruction and adjust }(hjhhhNhNubj)}(h**pool**h]hpool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh/ stats accordingly. 
The worker should be idle.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM3 hjubh)}(h **Context**h]j)}(hj֘h]hContext}(hjؘhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjԘubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM6 hjubh)}(hraw_spin_lock_irq(pool->lock).h]hraw_spin_lock_irq(pool->lock).}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM7 hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh idle_worker_timeout (C function)c.idle_worker_timeouthNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h/void idle_worker_timeout (struct timer_list *t)h]jx)}(h.void idle_worker_timeout(struct timer_list *t)h](j4)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMS ubj)}(h h]h }(hj*hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhj)hMS ubj)}(hidle_worker_timeouth]j)}(hidle_worker_timeouth]hidle_worker_timeout}(hj<hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj8ubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhj)hMS ubj )}(h(struct timer_list *t)h]j)}(hstruct timer_list *th](j~)}(hjh]hstruct}(hjXhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjTubj)}(h h]h }(hjehhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjTubh)}(hhh]j)}(h timer_listh]h timer_list}(hjvhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjsubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjxmodnameN classnameNjj)}j]j)}jj>sbc.idle_worker_timeoutasbuh1hhjTubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjTubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjTubj)}(hth]ht}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjTubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjPubah}(h]h ]h"]h$]h&]jjuh1j hjhhhj)hMS ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhj)hMS ubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhj)hMS hjhhubj)}(hhh]h)}(h.check if some idle workers can now be deleted.h]h.check if some idle workers can now be 
deleted.}(hjۙhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMS hjؙhhubah}(h]h ]h"]h$]h&]uh1jhjhhhj)hMS ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``struct timer_list *t`` The pool's idle_timer that just expired **Description** The timer is armed in worker_enter_idle(). Note that it isn't disarmed in worker_leave_idle(), as a worker flicking between idle and active while its pool is at the too_many_workers() tipping point would cause too much timer housekeeping overhead. Since IDLE_WORKER_TIMEOUT is long enough, we just let it expire and re-evaluate things from there.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMW hjubjS)}(hhh]jX)}(hA``struct timer_list *t`` The pool's idle_timer that just expired h](j^)}(h``struct timer_list *t``h]j)}(hjh]hstruct timer_list *t}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMT hjubjw)}(hhh]h)}(h'The pool's idle_timer that just expiredh]h)The pool’s idle_timer that just expired}(hj5hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj1hMT hj2ubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhj1hMT hjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjWh]h Description}(hjYhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjUubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMV hjubh)}(hXZThe timer is armed in worker_enter_idle(). Note that it isn't disarmed in worker_leave_idle(), as a worker flicking between idle and active while its pool is at the too_many_workers() tipping point would cause too much timer housekeeping overhead. 
Since IDLE_WORKER_TIMEOUT is long enough, we just let it expire and re-evaluate things from there.h]hX\The timer is armed in worker_enter_idle(). Note that it isn’t disarmed in worker_leave_idle(), as a worker flicking between idle and active while its pool is at the too_many_workers() tipping point would cause too much timer housekeeping overhead. Since IDLE_WORKER_TIMEOUT is long enough, we just let it expire and re-evaluate things from there.}(hjmhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMU hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhidle_cull_fn (C function)c.idle_cull_fnhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h,void idle_cull_fn (struct work_struct *work)h]jx)}(h+void idle_cull_fn(struct work_struct *work)h](j4)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMy ubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhMy ubj)}(h idle_cull_fnh]j)}(h idle_cull_fnh]h idle_cull_fn}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhMy ubj )}(h(struct work_struct *work)h]j)}(hstruct work_struct *workh](j~)}(hjh]hstruct}(hjٚhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj՚ubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj՚ubh)}(hhh]j)}(h work_structh]h work_struct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jjsbc.idle_cull_fnasbuh1hhj՚ubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj՚ubj)}(hjah]h*}(hj%hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj՚ubj)}(hworkh]hwork}(hj2hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj՚ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjњubah}(h]h ]h"]h$]h&]jjuh1j hjhhhjhMy ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhMy ubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhMy hjhhubj)}(hhh]h)}(h.cull workers that have been idle for too long.h]h.cull workers that have been idle for too long.}(hj\hhhNhNubah}(h]h 
]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMy hjYhhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhMy ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjtjjtjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``struct work_struct *work`` the pool's work for handling these idle workers **Description** This goes through a pool's idle workers and gets rid of those that have been idle for at least IDLE_WORKER_TIMEOUT seconds. We don't want to disturb isolated CPUs because of a pcpu kworker being culled, so this also resets worker affinity. This requires a sleepable context, hence the split between timer callback and work item.h](h)}(h**Parameters**h]j)}(hj~h]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj|ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM} hjxubjS)}(hhh]jX)}(hM``struct work_struct *work`` the pool's work for handling these idle workers h](j^)}(h``struct work_struct *work``h]j)}(hjh]hstruct work_struct *work}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMz hjubjw)}(hhh]h)}(h/the pool's work for handling these idle workersh]h1the pool’s work for handling these idle workers}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMz hjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMz hjubah}(h]h ]h"]h$]h&]uh1jRhjxubh)}(h**Description**h]j)}(hj؛h]h Description}(hjڛhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj֛ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM| hjxubh)}(h{This goes through a pool's idle workers and gets rid of those that have been idle for at least IDLE_WORKER_TIMEOUT seconds.h]h}This goes through a pool’s idle workers and gets rid of those that have been idle for at least IDLE_WORKER_TIMEOUT seconds.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM{ 
hjxubh)}(hWe don't want to disturb isolated CPUs because of a pcpu kworker being culled, so this also resets worker affinity. This requires a sleepable context, hence the split between timer callback and work item.h]hWe don’t want to disturb isolated CPUs because of a pcpu kworker being culled, so this also resets worker affinity. This requires a sleepable context, hence the split between timer callback and work item.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM~ hjxubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh maybe_create_worker (C function)c.maybe_create_workerhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h3void maybe_create_worker (struct worker_pool *pool)h]jx)}(h2void maybe_create_worker(struct worker_pool *pool)h](j4)}(hvoidh]hvoid}(hj,hhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj(hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM ubj)}(h h]h }(hj;hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj(hhhj:hM ubj)}(hmaybe_create_workerh]j)}(hmaybe_create_workerh]hmaybe_create_worker}(hjMhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjIubah}(h]h ](jjeh"]h$]h&]jjuh1jhj(hhhj:hM ubj )}(h(struct worker_pool *pool)h]j)}(hstruct worker_pool *poolh](j~)}(hjh]hstruct}(hjihhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjeubj)}(h h]h }(hjvhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjeubh)}(hhh]j)}(h worker_poolh]h worker_pool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jjOsbc.maybe_create_workerasbuh1hhjeubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjeubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjeubj)}(hpoolh]hpool}(hjœhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjeubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjaubah}(h]h ]h"]h$]h&]jjuh1j hj(hhhj:hM ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhj$hhhj:hM ubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhj:hM hj!hhubj)}(hhh]h)}(h create a new worker if necessaryh]h create a new worker if 
necessary}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjhhubah}(h]h ]h"]h$]h&]uh1jhj!hhhj:hM ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``struct worker_pool *pool`` pool to create a new worker for **Description** Create a new worker for **pool** if necessary. **pool** is guaranteed to have at least one idle worker on return from this function. If creating a new worker takes longer than MAYDAY_INTERVAL, mayday is sent to all rescuers with works scheduled on **pool** to resolve possible allocation deadlock. On return, need_to_create_worker() is guaranteed to be ``false`` and may_start_working() ``true``. LOCKING: raw_spin_lock_irq(pool->lock) which may be released and regrabbed multiple times. Does GFP_KERNEL allocations. Called only from manager.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubjS)}(hhh]jX)}(h=``struct worker_pool *pool`` pool to create a new worker for h](j^)}(h``struct worker_pool *pool``h]j)}(hj-h]hstruct worker_pool *pool}(hj/hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj+ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hj'ubjw)}(hhh]h)}(hpool to create a new worker forh]hpool to create a new worker for}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjBhM hjCubah}(h]h ]h"]h$]h&]uh1jvhj'ubeh}(h]h ]h"]h$]h&]uh1jWhjBhM hj$ubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjhh]h Description}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjfubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubh)}(hX+Create a new worker for **pool** if necessary. **pool** is guaranteed to have at least one idle worker on return from this function. 
If creating a new worker takes longer than MAYDAY_INTERVAL, mayday is sent to all rescuers with works scheduled on **pool** to resolve possible allocation deadlock.h](hCreate a new worker for }(hj~hhhNhNubj)}(h**pool**h]hpool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj~ubh if necessary. }(hj~hhhNhNubj)}(h**pool**h]hpool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj~ubh is guaranteed to have at least one idle worker on return from this function. If creating a new worker takes longer than MAYDAY_INTERVAL, mayday is sent to all rescuers with works scheduled on }(hj~hhhNhNubj)}(h**pool**h]hpool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj~ubh) to resolve possible allocation deadlock.}(hj~hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubh)}(hbOn return, need_to_create_worker() is guaranteed to be ``false`` and may_start_working() ``true``.h](h7On return, need_to_create_worker() is guaranteed to be }(hjÝhhhNhNubj)}(h ``false``h]hfalse}(hj˝hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjÝubh and may_start_working() }(hjÝhhhNhNubj)}(h``true``h]htrue}(hjݝhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjÝubh.}(hjÝhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubh)}(hLOCKING: raw_spin_lock_irq(pool->lock) which may be released and regrabbed multiple times. Does GFP_KERNEL allocations. Called only from manager.h]hLOCKING: raw_spin_lock_irq(pool->lock) which may be released and regrabbed multiple times. Does GFP_KERNEL allocations. 
Called only from manager.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhmanage_workers (C function)c.manage_workershNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h+bool manage_workers (struct worker *worker)h]jx)}(h*bool manage_workers(struct worker *worker)h](j4)}(hj&h]hbool}(hj%hhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj!hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM ubj)}(h h]h }(hj3hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj!hhhj2hM ubj)}(hmanage_workersh]j)}(hmanage_workersh]hmanage_workers}(hjEhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjAubah}(h]h ](jjeh"]h$]h&]jjuh1jhj!hhhj2hM ubj )}(h(struct worker *worker)h]j)}(hstruct worker *workerh](j~)}(hjh]hstruct}(hjahhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj]ubj)}(h h]h }(hjnhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj]ubh)}(hhh]j)}(hworkerh]hworker}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj|ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jjGsbc.manage_workersasbuh1hhj]ubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj]ubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj]ubj)}(hworkerh]hworker}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj]ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjYubah}(h]h ]h"]h$]h&]jjuh1j hj!hhhj2hM ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhj2hM ubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhj2hM hjhhubj)}(hhh]h)}(hmanage worker poolh]hmanage worker pool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjhhubah}(h]h ]h"]h$]h&]uh1jhjhhhj2hM ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hX)**Parameters** ``struct worker *worker`` self **Description** Assume the manager role and manage the worker pool **worker** belongs to. At any given time, there can be only zero or one manager per pool. The exclusion is handled automatically by this function. 
The caller can safely start processing works on false return. On true return, it's guaranteed that need_to_create_worker() is false and may_start_working() is true. **Context** raw_spin_lock_irq(pool->lock) which may be released and regrabbed multiple times. Does GFP_KERNEL allocations. **Return** ``false`` if the pool doesn't need management and the caller can safely start processing works, ``true`` if management function was performed and the conditions that the caller verified before calling the function may no longer be true.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubjS)}(hhh]jX)}(h``struct worker *worker`` self h](j^)}(h``struct worker *worker``h]j)}(hj%h]hstruct worker *worker}(hj'hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj#ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubjw)}(hhh]h)}(hselfh]hself}(hj>hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj:hM hj;ubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhj:hM hjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hj`h]h Description}(hjbhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj^ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubh)}(hAssume the manager role and manage the worker pool **worker** belongs to. At any given time, there can be only zero or one manager per pool. The exclusion is handled automatically by this function.h](h3Assume the manager role and manage the worker pool }(hjvhhhNhNubj)}(h **worker**h]hworker}(hj~hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjvubh belongs to. At any given time, there can be only zero or one manager per pool. 
The exclusion is handled automatically by this function.}(hjvhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubh)}(hThe caller can safely start processing works on false return. On true return, it's guaranteed that need_to_create_worker() is false and may_start_working() is true.h]hThe caller can safely start processing works on false return. On true return, it’s guaranteed that need_to_create_worker() is false and may_start_working() is true.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubh)}(h **Context**h]j)}(hjh]hContext}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubh)}(horaw_spin_lock_irq(pool->lock) which may be released and regrabbed multiple times. Does GFP_KERNEL allocations.h]horaw_spin_lock_irq(pool->lock) which may be released and regrabbed multiple times. 
Does GFP_KERNEL allocations.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubh)}(h **Return**h]j)}(hjϟh]hReturn}(hjџhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj͟ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubh)}(h``false`` if the pool doesn't need management and the caller can safely start processing works, ``true`` if management function was performed and the conditions that the caller verified before calling the function may no longer be true.h](j)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhY if the pool doesn’t need management and the caller can safely start processing works, }(hjhhhNhNubj)}(h``true``h]htrue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if management function was performed and the conditions that the caller verified before calling the function may no longer be true.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhprocess_one_work (C function)c.process_one_workhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hGvoid process_one_work (struct worker *worker, struct work_struct *work)h]jx)}(hFvoid process_one_work(struct worker *worker, struct work_struct *work)h](j4)}(hvoidh]hvoid}(hj4hhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj0hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM2 ubj)}(h h]h }(hjChhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj0hhhjBhM2 ubj)}(hprocess_one_workh]j)}(hprocess_one_workh]hprocess_one_work}(hjUhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjQubah}(h]h ](jjeh"]h$]h&]jjuh1jhj0hhhjBhM2 ubj )}(h1(struct worker *worker, struct work_struct *work)h](j)}(hstruct worker *workerh](j~)}(hjh]hstruct}(hjqhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjmubj)}(h h]h }(hj~hhhNhNubah}(h]h 
]jah"]h$]h&]uh1jhjmubh)}(hhh]j)}(hworkerh]hworker}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jjWsbc.process_one_workasbuh1hhjmubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjmubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjmubj)}(hworkerh]hworker}(hjʠhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjmubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjiubj)}(hstruct work_struct *workh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjߠubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjߠubh)}(hhh]j)}(h work_structh]h work_struct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]jc.process_one_workasbuh1hhjߠubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjߠubj)}(hjah]h*}(hj-hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjߠubj)}(hworkh]hwork}(hj:hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjߠubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjiubeh}(h]h ]h"]h$]h&]jjuh1j hj0hhhjBhM2 ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhj,hhhjBhM2 ubah}(h]j'ah ](jjeh"]h$]h&]jj)jhuh1jqhjBhM2 hj)hhubj)}(hhh]h)}(hprocess single workh]hprocess single work}(hjdhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM2 hjahhubah}(h]h ]h"]h$]h&]uh1jhj)hhhjBhM2 ubeh}(h]h ](jfunctioneh"]h$]h&]jjjj|jj|jjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``struct worker *worker`` self ``struct work_struct *work`` work to process **Description** Process **work**. This function contains all the logics necessary to process a single work including synchronization against and interaction with other workers on the same cpu, queueing and flushing. As long as context requirement is met, any worker can call this function to process a work. 
**Context** raw_spin_lock_irq(pool->lock) which is released and regrabbed.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM6 hjubjS)}(hhh](jX)}(h``struct worker *worker`` self h](j^)}(h``struct worker *worker``h]j)}(hjh]hstruct worker *worker}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM3 hjubjw)}(hhh]h)}(hselfh]hself}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM3 hjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhM3 hjubjX)}(h-``struct work_struct *work`` work to process h](j^)}(h``struct work_struct *work``h]j)}(hjޡh]hstruct work_struct *work}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjܡubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM4 hjءubjw)}(hhh]h)}(hwork to processh]hwork to process}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM4 hjubah}(h]h ]h"]h$]h&]uh1jvhjءubeh}(h]h ]h"]h$]h&]uh1jWhjhM4 hjubeh}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM6 hjubh)}(hX%Process **work**. This function contains all the logics necessary to process a single work including synchronization against and interaction with other workers on the same cpu, queueing and flushing. As long as context requirement is met, any worker can call this function to process a work.h](hProcess }(hj/hhhNhNubj)}(h**work**h]hwork}(hj7hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj/ubhX. This function contains all the logics necessary to process a single work including synchronization against and interaction with other workers on the same cpu, queueing and flushing. 
As long as context requirement is met, any worker can call this function to process a work.}(hj/hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM5 hjubh)}(h **Context**h]j)}(hjRh]hContext}(hjThhhNhNubah}(h]h ]h"]h$]h&]uh1jhjPubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM; hjubh)}(h>raw_spin_lock_irq(pool->lock) which is released and regrabbed.h]h>raw_spin_lock_irq(pool->lock) which is released and regrabbed.}(hjhhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM< hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh$process_scheduled_works (C function)c.process_scheduled_workshNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h4void process_scheduled_works (struct worker *worker)h]jx)}(h3void process_scheduled_works(struct worker *worker)h](j4)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM ubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhM ubj)}(hprocess_scheduled_worksh]j)}(hprocess_scheduled_worksh]hprocess_scheduled_works}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhM ubj )}(h(struct worker *worker)h]j)}(hstruct worker *workerh](j~)}(hjh]hstruct}(hjԢhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjТubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjТubh)}(hhh]j)}(hworkerh]hworker}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jjsbc.process_scheduled_worksasbuh1hhjТubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjТubj)}(hjah]h*}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjТubj)}(hworkerh]hworker}(hj-hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjТubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj̢ubah}(h]h ]h"]h$]h&]jjuh1j hjhhhjhM ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhM ubah}(h]jah 
](jjeh"]h$]h&]jj)jhuh1jqhjhM hjhhubj)}(hhh]h)}(hprocess scheduled worksh]hprocess scheduled works}(hjWhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjThhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhM ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjojjojjjuh1jlhhhjJhNhNubj)}(hXQ**Parameters** ``struct worker *worker`` self **Description** Process all scheduled works. Please note that the scheduled list may change while processing a work, so this function repeatedly fetches a work from the top and executes it. **Context** raw_spin_lock_irq(pool->lock) which may be released and regrabbed multiple times.h](h)}(h**Parameters**h]j)}(hjyh]h Parameters}(hj{hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjwubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjsubjS)}(hhh]jX)}(h``struct worker *worker`` self h](j^)}(h``struct worker *worker``h]j)}(hjh]hstruct worker *worker}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubjw)}(hhh]h)}(hselfh]hself}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM hjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhM hjubah}(h]h ]h"]h$]h&]uh1jRhjsubh)}(h**Description**h]j)}(hjӣh]h Description}(hjգhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjѣubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjsubh)}(hProcess all scheduled works. Please note that the scheduled list may change while processing a work, so this function repeatedly fetches a work from the top and executes it.h]hProcess all scheduled works. 
Please note that the scheduled list may change while processing a work, so this function repeatedly fetches a work from the top and executes it.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjsubh)}(h **Context**h]j)}(hjh]hContext}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjsubh)}(hQraw_spin_lock_irq(pool->lock) which may be released and regrabbed multiple times.h]hQraw_spin_lock_irq(pool->lock) which may be released and regrabbed multiple times.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjsubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhworker_thread (C function)c.worker_threadhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h"int worker_thread (void *__worker)h]jx)}(h!int worker_thread(void *__worker)h](j4)}(hinth]hint}(hj?hhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj;hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM ubj)}(h h]h }(hjNhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj;hhhjMhM ubj)}(h worker_threadh]j)}(h worker_threadh]h worker_thread}(hj`hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj\ubah}(h]h ](jjeh"]h$]h&]jjuh1jhj;hhhjMhM ubj )}(h(void *__worker)h]j)}(hvoid *__workerh](j4)}(hvoidh]hvoid}(hj|hhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjxubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjxubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjxubj)}(h__workerh]h__worker}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjxubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjtubah}(h]h ]h"]h$]h&]jjuh1j hj;hhhjMhM ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhj7hhhjMhM ubah}(h]j2ah ](jjeh"]h$]h&]jj)jhuh1jqhjMhM hj4hhubj)}(hhh]h)}(hthe worker thread functionh]hthe worker thread function}(hjϤhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM 
hj̤hhubah}(h]h ]h"]h$]h&]uh1jhj4hhhjMhM ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``void *__worker`` self **Description** The worker thread function. All workers belong to a worker_pool - either a per-cpu one or dynamic unbound one. These workers process all work items regardless of their specific target workqueue. The only exception is work items which belong to workqueues with a rescuer which will be explained in rescuer_thread(). **Return** 0h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubjS)}(hhh]jX)}(h``void *__worker`` self h](j^)}(h``void *__worker``h]j)}(hjh]hvoid *__worker}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hj ubjw)}(hhh]h)}(hselfh]hself}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj%hM hj&ubah}(h]h ]h"]h$]h&]uh1jvhj ubeh}(h]h ]h"]h$]h&]uh1jWhj%hM hjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjKh]h Description}(hjMhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjIubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubh)}(hX=The worker thread function. All workers belong to a worker_pool - either a per-cpu one or dynamic unbound one. These workers process all work items regardless of their specific target workqueue. The only exception is work items which belong to workqueues with a rescuer which will be explained in rescuer_thread().h]hX=The worker thread function. All workers belong to a worker_pool - either a per-cpu one or dynamic unbound one. These workers process all work items regardless of their specific target workqueue. 
The only exception is work items which belong to workqueues with a rescuer which will be explained in rescuer_thread().}(hjahhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubh)}(h **Return**h]j)}(hjrh]hReturn}(hjthhhNhNubah}(h]h ]h"]h$]h&]uh1jhjpubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubh)}(hjvh]h0}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhrescuer_thread (C function)c.rescuer_threadhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h$int rescuer_thread (void *__rescuer)h]jx)}(h#int rescuer_thread(void *__rescuer)h](j4)}(hinth]hint}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM\ ubj)}(h h]h }(hjťhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjĥhM\ ubj)}(hrescuer_threadh]j)}(hrescuer_threadh]hrescuer_thread}(hjץhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjӥubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjĥhM\ ubj )}(h(void *__rescuer)h]j)}(hvoid *__rescuerh](j4)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h __rescuerh]h __rescuer}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j hjhhhjĥhM\ ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjĥhM\ ubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjĥhM\ hjhhubj)}(hhh]h)}(hthe rescuer thread functionh]hthe rescuer thread function}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM\ hjChhubah}(h]h ]h"]h$]h&]uh1jhjhhhjĥhM\ ubeh}(h]h ](jfunctioneh"]h$]h&]jjjj^jj^jjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``void *__rescuer`` self **Description** Workqueue rescuer thread 
function. There's one rescuer for each workqueue which has WQ_MEM_RECLAIM set. Regular work processing on a pool may block trying to create a new worker which uses GFP_KERNEL allocation which has slight chance of developing into deadlock if some works currently on the same queue need to be processed to satisfy the GFP_KERNEL allocation. This is the problem rescuer solves. When such condition is possible, the pool summons rescuers of all workqueues which have works queued on the pool and let them process those works so that forward progress can be guaranteed. This should happen rarely. **Return** 0h](h)}(h**Parameters**h]j)}(hjhh]h Parameters}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjfubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM` hjbubjS)}(hhh]jX)}(h``void *__rescuer`` self h](j^)}(h``void *__rescuer``h]j)}(hjh]hvoid *__rescuer}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM] hjubjw)}(hhh]h)}(hselfh]hself}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM] hjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhM] hj~ubah}(h]h ]h"]h$]h&]uh1jRhjbubh)}(h**Description**h]j)}(hj¦h]h Description}(hjĦhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM_ hjbubh)}(hhWorkqueue rescuer thread function. There's one rescuer for each workqueue which has WQ_MEM_RECLAIM set.h]hjWorkqueue rescuer thread function. 
There’s one rescuer for each workqueue which has WQ_MEM_RECLAIM set.}(hjئhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM^ hjbubh)}(hX(Regular work processing on a pool may block trying to create a new worker which uses GFP_KERNEL allocation which has slight chance of developing into deadlock if some works currently on the same queue need to be processed to satisfy the GFP_KERNEL allocation. This is the problem rescuer solves.h]hX(Regular work processing on a pool may block trying to create a new worker which uses GFP_KERNEL allocation which has slight chance of developing into deadlock if some works currently on the same queue need to be processed to satisfy the GFP_KERNEL allocation. This is the problem rescuer solves.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMa hjbubh)}(hWhen such condition is possible, the pool summons rescuers of all workqueues which have works queued on the pool and let them process those works so that forward progress can be guaranteed.h]hWhen such condition is possible, the pool summons rescuers of all workqueues which have works queued on the pool and let them process those works so that forward progress can be guaranteed.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMg hjbubh)}(hThis should happen rarely.h]hThis should happen rarely.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMk hjbubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMm hjbubh)}(hjvh]h0}(hj,hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMn hjbubeh}(h]h ] 
kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh#check_flush_dependency (C function)c.check_flush_dependencyhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hsvoid check_flush_dependency (struct workqueue_struct *target_wq, struct work_struct *target_work, bool from_cancel)h]jx)}(hrvoid check_flush_dependency(struct workqueue_struct *target_wq, struct work_struct *target_work, bool from_cancel)h](j4)}(hvoidh]hvoid}(hjZhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjVhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMiubj)}(h h]h }(hjihhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjVhhhjhhMiubj)}(hcheck_flush_dependencyh]j)}(hcheck_flush_dependencyh]hcheck_flush_dependency}(hj{hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjwubah}(h]h ](jjeh"]h$]h&]jjuh1jhjVhhhjhhMiubj )}(hW(struct workqueue_struct *target_wq, struct work_struct *target_work, bool from_cancel)h](j)}(h"struct workqueue_struct *target_wqh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jj}sbc.check_flush_dependencyasbuh1hhjubj)}(h h]h }(hjէhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h target_wqh]h target_wq}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct work_struct *target_workh](j~)}(hjh]hstruct}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(h work_structh]h work_struct}(hj'hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj$ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj)modnameN classnameNjj)}j]jѧc.check_flush_dependencyasbuh1hhjubj)}(h h]h }(hjEhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjShhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h target_workh]h target_work}(hj`hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h 
]h"]h$]h&]noemphjjuh1jhjubj)}(hbool from_cancelh](j4)}(hj&h]hbool}(hjyhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjuubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjuubj)}(h from_cancelh]h from_cancel}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjuubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1j hjVhhhjhhMiubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjRhhhjhhMiubah}(h]jMah ](jjeh"]h$]h&]jj)jhuh1jqhjhhMihjOhhubj)}(hhh]h)}(h!check for flush dependency sanityh]h!check for flush dependency sanity}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMihjhhubah}(h]h ]h"]h$]h&]uh1jhjOhhhjhhMiubeh}(h]h ](jfunctioneh"]h$]h&]jjjj֨jj֨jjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``struct workqueue_struct *target_wq`` workqueue being flushed ``struct work_struct *target_work`` work item being flushed (NULL for workqueue flushes) ``bool from_cancel`` are we called from the work cancel path **Description** ``current`` is trying to flush the whole **target_wq** or **target_work** on it. 
If this is not the cancel path (which implies work being flushed is either already running, or will not be at all), check if **target_wq** doesn't have ``WQ_MEM_RECLAIM`` and verify that ``current`` is not reclaiming memory or running on a workqueue which doesn't have ``WQ_MEM_RECLAIM`` as that can break forward- progress guarantee leading to a deadlock.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjިubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMmhjڨubjS)}(hhh](jX)}(h?``struct workqueue_struct *target_wq`` workqueue being flushed h](j^)}(h&``struct workqueue_struct *target_wq``h]j)}(hjh]h"struct workqueue_struct *target_wq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMjhjubjw)}(hhh]h)}(hworkqueue being flushedh]hworkqueue being flushed}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMjhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMjhjubjX)}(hY``struct work_struct *target_work`` work item being flushed (NULL for workqueue flushes) h](j^)}(h#``struct work_struct *target_work``h]j)}(hj8h]hstruct work_struct *target_work}(hj:hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj6ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMkhj2ubjw)}(hhh]h)}(h4work item being flushed (NULL for workqueue flushes)h]h4work item being flushed (NULL for workqueue flushes)}(hjQhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjMhMkhjNubah}(h]h ]h"]h$]h&]uh1jvhj2ubeh}(h]h ]h"]h$]h&]uh1jWhjMhMkhjubjX)}(h=``bool from_cancel`` are we called from the work cancel path h](j^)}(h``bool from_cancel``h]j)}(hjqh]hbool from_cancel}(hjshhhNhNubah}(h]h ]h"]h$]h&]uh1jhjoubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMlhjkubjw)}(hhh]h)}(h'are we called from the work cancel pathh]h'are we called from the work 
cancel path}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMlhjubah}(h]h ]h"]h$]h&]uh1jvhjkubeh}(h]h ]h"]h$]h&]uh1jWhjhMlhjubeh}(h]h ]h"]h$]h&]uh1jRhjڨubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMnhjڨubh)}(hX``current`` is trying to flush the whole **target_wq** or **target_work** on it. If this is not the cancel path (which implies work being flushed is either already running, or will not be at all), check if **target_wq** doesn't have ``WQ_MEM_RECLAIM`` and verify that ``current`` is not reclaiming memory or running on a workqueue which doesn't have ``WQ_MEM_RECLAIM`` as that can break forward- progress guarantee leading to a deadlock.h](j)}(h ``current``h]hcurrent}(hjƩhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj©ubh is trying to flush the whole }(hj©hhhNhNubj)}(h **target_wq**h]h target_wq}(hjةhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj©ubh or }(hj©hhhNhNubj)}(h**target_work**h]h target_work}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj©ubh on it. 
If this is not the cancel path (which implies work being flushed is either already running, or will not be at all), check if }(hj©hhhNhNubj)}(h **target_wq**h]h target_wq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj©ubh doesn’t have }(hj©hhhNhNubj)}(h``WQ_MEM_RECLAIM``h]hWQ_MEM_RECLAIM}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj©ubh and verify that }(hj©hhhNhNubj)}(h ``current``h]hcurrent}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj©ubhI is not reclaiming memory or running on a workqueue which doesn’t have }(hj©hhhNhNubj)}(h``WQ_MEM_RECLAIM``h]hWQ_MEM_RECLAIM}(hj2hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj©ubhE as that can break forward- progress guarantee leading to a deadlock.}(hj©hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMmhjڨubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhinsert_wq_barrier (C function)c.insert_wq_barrierhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hvoid insert_wq_barrier (struct pool_workqueue *pwq, struct wq_barrier *barr, struct work_struct *target, struct worker *worker)h]jx)}(h~void insert_wq_barrier(struct pool_workqueue *pwq, struct wq_barrier *barr, struct work_struct *target, struct worker *worker)h](j4)}(hvoidh]hvoid}(hjkhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjghhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjzhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjghhhjyhMubj)}(hinsert_wq_barrierh]j)}(hinsert_wq_barrierh]hinsert_wq_barrier}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjghhhjyhMubj )}(hh(struct pool_workqueue *pwq, struct wq_barrier *barr, struct work_struct *target, struct worker *worker)h](j)}(hstruct pool_workqueue *pwqh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(hpool_workqueueh]hpool_workqueue}(hjƪhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjêubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjȪmodnameN 
classnameNjj)}j]j)}jjsbc.insert_wq_barrierasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hpwqh]hpwq}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct wq_barrier *barrh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hj'hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(h wq_barrierh]h wq_barrier}(hj8hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj5ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj:modnameN classnameNjj)}j]jc.insert_wq_barrierasbuh1hhjubj)}(h h]h }(hjVhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjdhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hbarrh]hbarr}(hjqhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct work_struct *targeth](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(h work_structh]h work_struct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]jc.insert_wq_barrierasbuh1hhjubj)}(h h]h }(hjƫhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjԫhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(htargeth]htarget}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct worker *workerh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkerh]hworker}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]jc.insert_wq_barrierasbuh1hhjubj)}(h h]h }(hj6hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjDhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hworkerh]hworker}(hjQhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1j hjghhhjyhMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjchhhjyhMubah}(h]j^ah ](jjeh"]h$]h&]jj)jhuh1jqhjyhMhj`hhubj)}(hhh]h)}(hinsert a barrier workh]hinsert a barrier work}(hj{hhhNhNubah}(h]h 
]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjxhhubah}(h]h ]h"]h$]h&]uh1jhj`hhhjyhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``struct pool_workqueue *pwq`` pwq to insert barrier into ``struct wq_barrier *barr`` wq_barrier to insert ``struct work_struct *target`` target work to attach **barr** to ``struct worker *worker`` worker currently executing **target**, NULL if **target** is not executing **Description** **barr** is linked to **target** such that **barr** is completed only after **target** finishes execution. Please note that the ordering guarantee is observed only with respect to **target** and on the local cpu. Currently, a queued barrier can't be canceled. This is because try_to_grab_pending() can't determine whether the work to be grabbed is at the head of the queue and thus can't clear LINKED flag of the previous work while there must be a valid next work after a work with LINKED flag set. Note that when **worker** is non-NULL, **target** may be modified underneath us, so we can't reliably determine pwq from **target**. 
**Context** raw_spin_lock_irq(pool->lock).h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjS)}(hhh](jX)}(h:``struct pool_workqueue *pwq`` pwq to insert barrier into h](j^)}(h``struct pool_workqueue *pwq``h]j)}(hjh]hstruct pool_workqueue *pwq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(hpwq to insert barrier intoh]hpwq to insert barrier into}(hjլhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjѬhMhjҬubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjѬhMhjubjX)}(h1``struct wq_barrier *barr`` wq_barrier to insert h](j^)}(h``struct wq_barrier *barr``h]j)}(hjh]hstruct wq_barrier *barr}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(hwq_barrier to inserth]hwq_barrier to insert}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj hMhj ubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhj hMhjubjX)}(hA``struct work_struct *target`` target work to attach **barr** to h](j^)}(h``struct work_struct *target``h]j)}(hj.h]hstruct work_struct *target}(hj0hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj,ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj(ubjw)}(hhh]h)}(h!target work to attach **barr** toh](htarget work to attach }(hjGhhhNhNubj)}(h**barr**h]hbarr}(hjOhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjGubh to}(hjGhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhjChMhjDubah}(h]h ]h"]h$]h&]uh1jvhj(ubeh}(h]h ]h"]h$]h&]uh1jWhjChMhjubjX)}(he``struct worker *worker`` worker currently executing **target**, NULL if **target** is not executing h](j^)}(h``struct worker *worker``h]j)}(hjyh]hstruct worker *worker}(hj{hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjwubah}(h]h 
]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjsubjw)}(hhh]h)}(hJworker currently executing **target**, NULL if **target** is not executingh](hworker currently executing }(hjhhhNhNubj)}(h **target**h]htarget}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh , NULL if }(hjhhhNhNubj)}(h **target**h]htarget}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh is not executing}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjsubeh}(h]h ]h"]h$]h&]uh1jWhjhMhjubeh}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjحh]h Description}(hjڭhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj֭ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(h**barr** is linked to **target** such that **barr** is completed only after **target** finishes execution. Please note that the ordering guarantee is observed only with respect to **target** and on the local cpu.h](j)}(h**barr**h]hbarr}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh is linked to }(hjhhhNhNubj)}(h **target**h]htarget}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh such that }(hjhhhNhNubj)}(h**barr**h]hbarr}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh is completed only after }(hjhhhNhNubj)}(h **target**h]htarget}(hj(hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh_ finishes execution. Please note that the ordering guarantee is observed only with respect to }(hjhhhNhNubj)}(h **target**h]htarget}(hj:hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh and on the local cpu.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(hXCurrently, a queued barrier can't be canceled. This is because try_to_grab_pending() can't determine whether the work to be grabbed is at the head of the queue and thus can't clear LINKED flag of the previous work while there must be a valid next work after a work with LINKED flag set.h]hX%Currently, a queued barrier can’t be canceled. 
This is because try_to_grab_pending() can’t determine whether the work to be grabbed is at the head of the queue and thus can’t clear LINKED flag of the previous work while there must be a valid next work after a work with LINKED flag set.}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(hNote that when **worker** is non-NULL, **target** may be modified underneath us, so we can't reliably determine pwq from **target**.h](hNote that when }(hjbhhhNhNubj)}(h **worker**h]hworker}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjbubh is non-NULL, }(hjbhhhNhNubj)}(h **target**h]htarget}(hj|hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjbubhJ may be modified underneath us, so we can’t reliably determine pwq from }(hjbhhhNhNubj)}(h **target**h]htarget}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjbubh.}(hjbhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(h **Context**h]j)}(hjh]hContext}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(hraw_spin_lock_irq(pool->lock).h]hraw_spin_lock_irq(pool->lock).}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh&flush_workqueue_prep_pwqs (C function)c.flush_workqueue_prep_pwqshNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h]bool flush_workqueue_prep_pwqs (struct workqueue_struct *wq, int flush_color, int work_color)h]jx)}(h\bool flush_workqueue_prep_pwqs(struct workqueue_struct *wq, int flush_color, int work_color)h](j4)}(hj&h]hbool}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h 
]jah"]h$]h&]uh1jhjhhhjhMubj)}(hflush_workqueue_prep_pwqsh]j)}(hflush_workqueue_prep_pwqsh]hflush_workqueue_prep_pwqs}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj ubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhMubj )}(h>(struct workqueue_struct *wq, int flush_color, int work_color)h](j)}(hstruct workqueue_struct *wqh](j~)}(hjh]hstruct}(hj*hhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj&ubj)}(h h]h }(hj7hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj&ubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjHhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjEubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjJmodnameN classnameNjj)}j]j)}jjsbc.flush_workqueue_prep_pwqsasbuh1hhj&ubj)}(h h]h }(hjhhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj&ubj)}(hjah]h*}(hjvhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj&ubj)}(hwqh]hwq}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj&ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj"ubj)}(hint flush_colorh](j4)}(hinth]hint}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h flush_colorh]h flush_color}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj"ubj)}(hint work_colorh](j4)}(hinth]hint}(hjѯhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjͯubj)}(h h]h }(hj߯hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjͯubj)}(h work_colorh]h work_color}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjͯubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj"ubeh}(h]h ]h"]h$]h&]jjuh1j hjhhhjhMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhMubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhMhjhhubj)}(hhh]h)}(h#prepare pwqs for workqueue flushingh]h#prepare pwqs for workqueue flushing}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjj/jj/jjjuh1jlhhhjJhNhNubj)}(hXa**Parameters** ``struct workqueue_struct *wq`` workqueue being flushed ``int flush_color`` new flush color, < 0 for no-op ``int work_color`` new work color, < 0 for no-op **Description** Prepare pwqs for workqueue flushing. 
If **flush_color** is non-negative, flush_color on all pwqs should be -1. If no pwq has in-flight commands at the specified color, all pwq->flush_color's stay at -1 and ``false`` is returned. If any pwq has in flight commands, its pwq->flush_color is set to **flush_color**, **wq->nr_pwqs_to_flush** is updated accordingly, pwq wakeup logic is armed and ``true`` is returned. The caller should have initialized **wq->first_flusher** prior to calling this function with non-negative **flush_color**. If **flush_color** is negative, no flush color update is done and ``false`` is returned. If **work_color** is non-negative, all pwqs should have the same work_color which is previous to **work_color** and all will be advanced to **work_color**. **Context** mutex_lock(wq->mutex). **Return** ``true`` if **flush_color** >= 0 and there's something to flush. ``false`` otherwise.h](h)}(h**Parameters**h]j)}(hj9h]h Parameters}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj7ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj3ubjS)}(hhh](jX)}(h8``struct workqueue_struct *wq`` workqueue being flushed h](j^)}(h``struct workqueue_struct *wq``h]j)}(hjXh]hstruct workqueue_struct *wq}(hjZhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjVubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjRubjw)}(hhh]h)}(hworkqueue being flushedh]hworkqueue being flushed}(hjqhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjmhMhjnubah}(h]h ]h"]h$]h&]uh1jvhjRubeh}(h]h ]h"]h$]h&]uh1jWhjmhMhjOubjX)}(h3``int flush_color`` new flush color, < 0 for no-op h](j^)}(h``int flush_color``h]j)}(hjh]hint flush_color}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(hnew flush color, < 0 for no-oph]hnew flush color, < 0 for no-op}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h 
]h"]h$]h&]uh1jWhjhMhjOubjX)}(h1``int work_color`` new work color, < 0 for no-op h](j^)}(h``int work_color``h]j)}(hjʰh]hint work_color}(hj̰hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjȰubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjİubjw)}(hhh]h)}(hnew work color, < 0 for no-oph]hnew work color, < 0 for no-op}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj߰hMhjubah}(h]h ]h"]h$]h&]uh1jvhjİubeh}(h]h ]h"]h$]h&]uh1jWhj߰hMhjOubeh}(h]h ]h"]h$]h&]uh1jRhj3ubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj3ubh)}(h$Prepare pwqs for workqueue flushing.h]h$Prepare pwqs for workqueue flushing.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj3ubh)}(hXyIf **flush_color** is non-negative, flush_color on all pwqs should be -1. If no pwq has in-flight commands at the specified color, all pwq->flush_color's stay at -1 and ``false`` is returned. If any pwq has in flight commands, its pwq->flush_color is set to **flush_color**, **wq->nr_pwqs_to_flush** is updated accordingly, pwq wakeup logic is armed and ``true`` is returned.h](hIf }(hj*hhhNhNubj)}(h**flush_color**h]h flush_color}(hj2hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj*ubh is non-negative, flush_color on all pwqs should be -1. If no pwq has in-flight commands at the specified color, all pwq->flush_color’s stay at -1 and }(hj*hhhNhNubj)}(h ``false``h]hfalse}(hjDhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj*ubhQ is returned. 
If any pwq has in flight commands, its pwq->flush_color is set to }(hj*hhhNhNubj)}(h**flush_color**h]h flush_color}(hjVhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj*ubh, }(hj*hhhNhNubj)}(h**wq->nr_pwqs_to_flush**h]hwq->nr_pwqs_to_flush}(hjhhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj*ubh7 is updated accordingly, pwq wakeup logic is armed and }(hj*hhhNhNubj)}(h``true``h]htrue}(hjzhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj*ubh is returned.}(hj*hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj3ubh)}(hThe caller should have initialized **wq->first_flusher** prior to calling this function with non-negative **flush_color**. If **flush_color** is negative, no flush color update is done and ``false`` is returned.h](h#The caller should have initialized }(hjhhhNhNubj)}(h**wq->first_flusher**h]hwq->first_flusher}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh2 prior to calling this function with non-negative }(hjhhhNhNubj)}(h**flush_color**h]h flush_color}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh. 
If }(hjhhhNhNubj)}(h**flush_color**h]h flush_color}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh0 is negative, no flush color update is done and }(hjhhhNhNubj)}(h ``false``h]hfalse}(hjѱhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh is returned.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj3ubh)}(hIf **work_color** is non-negative, all pwqs should have the same work_color which is previous to **work_color** and all will be advanced to **work_color**.h](hIf }(hjhhhNhNubj)}(h**work_color**h]h work_color}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhP is non-negative, all pwqs should have the same work_color which is previous to }(hjhhhNhNubj)}(h**work_color**h]h work_color}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh and all will be advanced to }(hjhhhNhNubj)}(h**work_color**h]h work_color}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj3ubh)}(h **Context**h]j)}(hj1h]hContext}(hj3hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj/ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj3ubh)}(hmutex_lock(wq->mutex).h]hmutex_lock(wq->mutex).}(hjGhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj3ubh)}(h **Return**h]j)}(hjXh]hReturn}(hjZhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjVubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj3ubh)}(hV``true`` if **flush_color** >= 0 and there's something to flush. ``false`` otherwise.h](j)}(h``true``h]htrue}(hjrhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjnubh if }(hjnhhhNhNubj)}(h**flush_color**h]h flush_color}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjnubh) >= 0 and there’s something to flush. 
}(hjnhhhNhNubj)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjnubh otherwise.}(hjnhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj3ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh__flush_workqueue (C function)c.__flush_workqueuehNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h4void __flush_workqueue (struct workqueue_struct *wq)h]jx)}(h3void __flush_workqueue(struct workqueue_struct *wq)h](j4)}(hvoidh]hvoid}(hjϲhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj˲hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM[ubj)}(h h]h }(hj޲hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj˲hhhjݲhM[ubj)}(h__flush_workqueueh]j)}(h__flush_workqueueh]h__flush_workqueue}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhj˲hhhjݲhM[ubj )}(h(struct workqueue_struct *wq)h]j)}(hstruct workqueue_struct *wqh](j~)}(hjh]hstruct}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hj*hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj'ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj,modnameN classnameNjj)}j]j)}jjsbc.__flush_workqueueasbuh1hhjubj)}(h h]h }(hjJhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjXhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hwqh]hwq}(hjehhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j hj˲hhhjݲhM[ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjDzhhhjݲhM[ubah}(h]j²ah ](jjeh"]h$]h&]jj)jhuh1jqhjݲhM[hjIJhhubj)}(hhh]h)}(h5ensure that any scheduled work has run to completion.h]h5ensure that any scheduled work has run to completion.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM[hjhhubah}(h]h ]h"]h$]h&]uh1jhjIJhhhjݲhM[ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(h**Parameters** ``struct workqueue_struct *wq`` workqueue to flush **Description** This 
function sleeps until all work items which were queued on entry have finished execution, but it is not livelocked by new incoming ones.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM_hjubjS)}(hhh]jX)}(h3``struct workqueue_struct *wq`` workqueue to flush h](j^)}(h``struct workqueue_struct *wq``h]j)}(hjгh]hstruct workqueue_struct *wq}(hjҳhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjγubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM\hjʳubjw)}(hhh]h)}(hworkqueue to flushh]hworkqueue to flush}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM\hjubah}(h]h ]h"]h$]h&]uh1jvhjʳubeh}(h]h ]h"]h$]h&]uh1jWhjhM\hjdzubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hj h]h Description}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM^hjubh)}(hThis function sleeps until all work items which were queued on entry have finished execution, but it is not livelocked by new incoming ones.h]hThis function sleeps until all work items which were queued on entry have finished execution, but it is not livelocked by new incoming ones.}(hj!hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM]hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhdrain_workqueue (C function)c.drain_workqueuehNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h2void drain_workqueue (struct workqueue_struct *wq)h]jx)}(h1void drain_workqueue(struct workqueue_struct *wq)h](j4)}(hvoidh]hvoid}(hjPhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjLhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hj_hhhNhNubah}(h]h 
]jah"]h$]h&]uh1jhjLhhhj^hMubj)}(hdrain_workqueueh]j)}(hdrain_workqueueh]hdrain_workqueue}(hjqhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjmubah}(h]h ](jjeh"]h$]h&]jjuh1jhjLhhhj^hMubj )}(h(struct workqueue_struct *wq)h]j)}(hstruct workqueue_struct *wqh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jjssbc.drain_workqueueasbuh1hhjubj)}(h h]h }(hj˴hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjٴhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hwqh]hwq}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j hjLhhhj^hMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjHhhhj^hMubah}(h]jCah ](jjeh"]h$]h&]jj)jhuh1jqhj^hMhjEhhubj)}(hhh]h)}(hdrain a workqueueh]hdrain a workqueue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj hhubah}(h]h ]h"]h$]h&]uh1jhjEhhhj^hMubeh}(h]h ](jfunctioneh"]h$]h&]jjjj(jj(jjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``struct workqueue_struct *wq`` workqueue to drain **Description** Wait until the workqueue becomes empty. While draining is in progress, only chain queueing is allowed. IOW, only currently pending or running work items on **wq** can queue further work items on it. **wq** is flushed repeatedly until it becomes empty. The number of flushing is determined by the depth of chaining and should be relatively short. 
Whine if it takes too long.h](h)}(h**Parameters**h]j)}(hj2h]h Parameters}(hj4hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj0ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj,ubjS)}(hhh]jX)}(h3``struct workqueue_struct *wq`` workqueue to drain h](j^)}(h``struct workqueue_struct *wq``h]j)}(hjQh]hstruct workqueue_struct *wq}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1jhjOubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjKubjw)}(hhh]h)}(hworkqueue to drainh]hworkqueue to drain}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjfhMhjgubah}(h]h ]h"]h$]h&]uh1jvhjKubeh}(h]h ]h"]h$]h&]uh1jWhjfhMhjHubah}(h]h ]h"]h$]h&]uh1jRhj,ubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj,ubh)}(hXzWait until the workqueue becomes empty. While draining is in progress, only chain queueing is allowed. IOW, only currently pending or running work items on **wq** can queue further work items on it. **wq** is flushed repeatedly until it becomes empty. The number of flushing is determined by the depth of chaining and should be relatively short. Whine if it takes too long.h](hWait until the workqueue becomes empty. While draining is in progress, only chain queueing is allowed. IOW, only currently pending or running work items on }(hjhhhNhNubj)}(h**wq**h]hwq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh& can queue further work items on it. }(hjhhhNhNubj)}(h**wq**h]hwq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh is flushed repeatedly until it becomes empty. The number of flushing is determined by the depth of chaining and should be relatively short. 
Whine if it takes too long.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj,ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhflush_work (C function) c.flush_workhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h*bool flush_work (struct work_struct *work)h]jx)}(h)bool flush_work(struct work_struct *work)h](j4)}(hj&h]hbool}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhMubj)}(h flush_workh]j)}(h flush_workh]h flush_work}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhMubj )}(h(struct work_struct *work)h]j)}(hstruct work_struct *workh](j~)}(hjh]hstruct}(hj1hhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj-ubj)}(h h]h }(hj>hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj-ubh)}(hhh]j)}(h work_structh]h work_struct}(hjOhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjLubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjQmodnameN classnameNjj)}j]j)}jjsb c.flush_workasbuh1hhj-ubj)}(h h]h }(hjohhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj-ubj)}(hjah]h*}(hj}hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj-ubj)}(hworkh]hwork}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj-ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj)ubah}(h]h ]h"]h$]h&]jjuh1j hjhhhjhMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhMubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhMhjhhubj)}(hhh]h)}(h>wait for a work to finish executing the last queueing instanceh]h>wait for a work to finish executing the last queueing instance}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjj̶jj̶jjjuh1jlhhhjJhNhNubj)}(hXL**Parameters** ``struct work_struct *work`` the work to flush **Description** Wait until **work** has finished execution. **work** is guaranteed to be idle on return if it hasn't been requeued since flush started. 
**Return** ``true`` if flush_work() waited for the work to finish execution, ``false`` if it was already idle.h](h)}(h**Parameters**h]j)}(hjֶh]h Parameters}(hjضhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjԶubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjжubjS)}(hhh]jX)}(h/``struct work_struct *work`` the work to flush h](j^)}(h``struct work_struct *work``h]j)}(hjh]hstruct work_struct *work}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(hthe work to flushh]hthe work to flush}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj hMhj ubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhj hMhjubah}(h]h ]h"]h$]h&]uh1jRhjжubh)}(h**Description**h]j)}(hj0h]h Description}(hj2hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj.ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjжubh)}(hWait until **work** has finished execution. **work** is guaranteed to be idle on return if it hasn't been requeued since flush started.h](h Wait until }(hjFhhhNhNubj)}(h**work**h]hwork}(hjNhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjFubh has finished execution. 
}(hjFhhhNhNubj)}(h**work**h]hwork}(hj`hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjFubhU is guaranteed to be idle on return if it hasn’t been requeued since flush started.}(hjFhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjжubh)}(h **Return**h]j)}(hj{h]hReturn}(hj}hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjyubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjжubh)}(hc``true`` if flush_work() waited for the work to finish execution, ``false`` if it was already idle.h](j)}(h``true``h]htrue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh: if flush_work() waited for the work to finish execution, }(hjhhhNhNubj)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if it was already idle.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjжubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhflush_delayed_work (C function)c.flush_delayed_workhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h4bool flush_delayed_work (struct delayed_work *dwork)h]jx)}(h3bool flush_delayed_work(struct delayed_work *dwork)h](j4)}(hj&h]hbool}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjܷhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjܷhhhjhMubj)}(hflush_delayed_workh]j)}(hflush_delayed_workh]hflush_delayed_work}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjܷhhhjhMubj )}(h(struct delayed_work *dwork)h]j)}(hstruct delayed_work *dworkh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hj)hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(h delayed_workh]h delayed_work}(hj:hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj7ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj<modnameN classnameNjj)}j]j)}jjsbc.flush_delayed_workasbuh1hhjubj)}(h h]h }(hjZhhhNhNubah}(h]h 
]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjhhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hdworkh]hdwork}(hjuhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j hjܷhhhjhMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjطhhhjhMubah}(h]jӷah ](jjeh"]h$]h&]jj)jhuh1jqhjhMhjշhhubj)}(hhh]h)}(h6wait for a dwork to finish executing the last queueingh]h6wait for a dwork to finish executing the last queueing}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jhjշhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hXz**Parameters** ``struct delayed_work *dwork`` the delayed work to flush **Description** Delayed timer is cancelled and the pending work is queued for immediate execution. Like flush_work(), this function only considers the last queueing instance of **dwork**. **Return** ``true`` if flush_work() waited for the work to finish execution, ``false`` if it was already idle.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjøhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjS)}(hhh]jX)}(h9``struct delayed_work *dwork`` the delayed work to flush h](j^)}(h``struct delayed_work *dwork``h]j)}(hjh]hstruct delayed_work *dwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj޸ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjڸubjw)}(hhh]h)}(hthe delayed work to flushh]hthe delayed work to flush}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjڸubeh}(h]h ]h"]h$]h&]uh1jWhjhMhj׸ubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(hDelayed timer is cancelled and the pending work is queued for immediate execution. 
Like flush_work(), this function only considers the last queueing instance of **dwork**.h](hDelayed timer is cancelled and the pending work is queued for immediate execution. Like flush_work(), this function only considers the last queueing instance of }(hj1hhhNhNubj)}(h **dwork**h]hdwork}(hj9hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj1ubh.}(hj1hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(h **Return**h]j)}(hjTh]hReturn}(hjVhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjRubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(hc``true`` if flush_work() waited for the work to finish execution, ``false`` if it was already idle.h](j)}(h``true``h]htrue}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjjubh: if flush_work() waited for the work to finish execution, }(hjjhhhNhNubj)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjjubh if it was already idle.}(hjjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhflush_rcu_work (C function)c.flush_rcu_workhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h,bool flush_rcu_work (struct rcu_work *rwork)h]jx)}(h+bool flush_rcu_work(struct rcu_work *rwork)h](j4)}(hj&h]hbool}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjǹhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjƹhMubj)}(hflush_rcu_workh]j)}(hflush_rcu_workh]hflush_rcu_work}(hjٹhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjչubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjƹhMubj )}(h(struct rcu_work *rwork)h]j)}(hstruct rcu_work *rworkh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(hrcu_workh]hrcu_work}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej 
reftargetjmodnameN classnameNjj)}j]j)}jj۹sbc.flush_rcu_workasbuh1hhjubj)}(h h]h }(hj3hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjAhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hrworkh]hrwork}(hjNhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j hjhhhjƹhMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjƹhMubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjƹhMhjhhubj)}(hhh]h)}(h6wait for a rwork to finish executing the last queueingh]h6wait for a rwork to finish executing the last queueing}(hjxhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjuhhubah}(h]h ]h"]h$]h&]uh1jhjhhhjƹhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(h**Parameters** ``struct rcu_work *rwork`` the rcu work to flush **Return** ``true`` if flush_rcu_work() waited for the work to finish execution, ``false`` if it was already idle.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjS)}(hhh]jX)}(h1``struct rcu_work *rwork`` the rcu work to flush h](j^)}(h``struct rcu_work *rwork``h]j)}(hjh]hstruct rcu_work *rwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(hthe rcu work to flushh]hthe rcu work to flush}(hjҺhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjκhMhjϺubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjκhMhjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(hg``true`` if flush_rcu_work() waited for the work to finish execution, ``false`` if it was already idle.h](j)}(h``true``h]htrue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubh> if flush_rcu_work() waited for the work to 
finish execution, }(hj hhhNhNubj)}(h ``false``h]hfalse}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubh if it was already idle.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhcancel_work_sync (C function)c.cancel_work_synchNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h0bool cancel_work_sync (struct work_struct *work)h]jx)}(h/bool cancel_work_sync(struct work_struct *work)h](j4)}(hj&h]hbool}(hjYhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjUhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjghhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjUhhhjfhMubj)}(hcancel_work_synch]j)}(hcancel_work_synch]hcancel_work_sync}(hjyhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjuubah}(h]h ](jjeh"]h$]h&]jjuh1jhjUhhhjfhMubj )}(h(struct work_struct *work)h]j)}(hstruct work_struct *workh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(h work_structh]h work_struct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jj{sbc.cancel_work_syncasbuh1hhjubj)}(h h]h }(hjӻhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hworkh]hwork}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j hjUhhhjfhMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjQhhhjfhMubah}(h]jLah ](jjeh"]h$]h&]jj)jhuh1jqhjfhMhjNhhubj)}(hhh]h)}(h'cancel a work and wait for it to finishh]h'cancel a work and wait for it to finish}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jhjNhhhjfhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjj0jj0jjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``struct work_struct *work`` the work to cancel **Description** Cancel **work** and wait for its 
execution to finish. This function can be used even if the work re-queues itself or migrates to another workqueue. On return from this function, **work** is guaranteed to be not pending or executing on any CPU as long as there aren't racing enqueues. cancel_work_sync(:c:type:`delayed_work->work `) must not be used for delayed_work's. Use cancel_delayed_work_sync() instead. Must be called from a sleepable context if **work** was last queued on a non-BH workqueue. Can also be called from non-hardirq atomic contexts including BH if **work** was last queued on a BH workqueue. Returns ``true`` if **work** was pending, ``false`` otherwise.h](h)}(h**Parameters**h]j)}(hj:h]h Parameters}(hj<hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj8ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM"hj4ubjS)}(hhh]jX)}(h0``struct work_struct *work`` the work to cancel h](j^)}(h``struct work_struct *work``h]j)}(hjYh]hstruct work_struct *work}(hj[hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjWubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjSubjw)}(hhh]h)}(hthe work to cancelh]hthe work to cancel}(hjrhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjnhMhjoubah}(h]h ]h"]h$]h&]uh1jvhjSubeh}(h]h ]h"]h$]h&]uh1jWhjnhMhjPubah}(h]h ]h"]h$]h&]uh1jRhj4ubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM!hj4ubh)}(hXCancel **work** and wait for its execution to finish. This function can be used even if the work re-queues itself or migrates to another workqueue. On return from this function, **work** is guaranteed to be not pending or executing on any CPU as long as there aren't racing enqueues.h](hCancel }(hjhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh and wait for its execution to finish. 
This function can be used even if the work re-queues itself or migrates to another workqueue. On return from this function, }(hjhhhNhNubj)}(h**work**h]hwork}(hjļhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhc is guaranteed to be not pending or executing on any CPU as long as there aren’t racing enqueues.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hj4ubh)}(hcancel_work_sync(:c:type:`delayed_work->work `) must not be used for delayed_work's. Use cancel_delayed_work_sync() instead.h](hcancel_work_sync(}(hjݼhhhNhNubh)}(h+:c:type:`delayed_work->work `h]j)}(hjh]hdelayed_work->work}(hjhhhNhNubah}(h]h ](xrefjc-typeeh"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]refdoccore-api/workqueue refdomainjreftypetype refexplicitrefwarnjj)}j]sb reftarget delayed_workuh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM%hjݼubhP) must not be used for delayed_work’s. Use cancel_delayed_work_sync() instead.}(hjݼhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhj hM%hj4ubh)}(hMust be called from a sleepable context if **work** was last queued on a non-BH workqueue. Can also be called from non-hardirq atomic contexts including BH if **work** was last queued on a BH workqueue.h](h+Must be called from a sleepable context if }(hjhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhl was last queued on a non-BH workqueue. 
Can also be called from non-hardirq atomic contexts including BH if }(hjhhhNhNubj)}(h**work**h]hwork}(hj/hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh# was last queued on a BH workqueue.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM(hj4ubh)}(h>Returns ``true`` if **work** was pending, ``false`` otherwise.h](hReturns }(hjHhhhNhNubj)}(h``true``h]htrue}(hjPhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjHubh if }(hjHhhhNhNubj)}(h**work**h]hwork}(hjbhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjHubh was pending, }(hjHhhhNhNubj)}(h ``false``h]hfalse}(hjthhhNhNubah}(h]h ]h"]h$]h&]uh1jhjHubh otherwise.}(hjHhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM,hj4ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh cancel_delayed_work (C function)c.cancel_delayed_workhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h5bool cancel_delayed_work (struct delayed_work *dwork)h]jx)}(h4bool cancel_delayed_work(struct delayed_work *dwork)h](j4)}(hj&h]hbool}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM6ubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhM6ubj)}(hcancel_delayed_workh]j)}(hcancel_delayed_workh]hcancel_delayed_work}(hjͽhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjɽubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhM6ubj )}(h(struct delayed_work *dwork)h]j)}(hstruct delayed_work *dworkh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(h delayed_workh]h delayed_work}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj modnameN classnameNjj)}j]j)}jjϽsbc.cancel_delayed_workasbuh1hhjubj)}(h h]h }(hj'hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hj5hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hdworkh]hdwork}(hjBhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h 
]h"]h$]h&]jjuh1j hjhhhjhM6ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhM6ubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhM6hjhhubj)}(hhh]h)}(hcancel a delayed workh]hcancel a delayed work}(hjlhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM6hjihhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhM6ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``struct delayed_work *dwork`` delayed_work to cancel **Description** Kill off a pending delayed_work. **Return** ``true`` if **dwork** was pending and canceled; ``false`` if it wasn't pending. **Note** The work callback function may still be running on return, unless it returns ``true`` and the work doesn't re-arm itself. Explicitly flush or use cancel_delayed_work_sync() to wait on it. This function is safe to call from any context including IRQ handler.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM:hjubjS)}(hhh]jX)}(h6``struct delayed_work *dwork`` delayed_work to cancel h](j^)}(h``struct delayed_work *dwork``h]j)}(hjh]hstruct delayed_work *dwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM7hjubjw)}(hhh]h)}(hdelayed_work to cancelh]hdelayed_work to cancel}(hjƾhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj¾hM7hjþubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhj¾hM7hjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM9hjubh)}(h Kill off a pending delayed_work.h]h Kill off a pending delayed_work.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM8hjubh)}(h 
**Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM:hjubh)}(hO``true`` if **dwork** was pending and canceled; ``false`` if it wasn't pending.h](j)}(h``true``h]htrue}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj%ubh if }(hj%hhhNhNubj)}(h **dwork**h]hdwork}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj%ubh was pending and canceled; }(hj%hhhNhNubj)}(h ``false``h]hfalse}(hjMhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj%ubh if it wasn’t pending.}(hj%hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM;hjubh)}(h**Note**h]j)}(hjhh]hNote}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjfubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM>hjubh)}(hThe work callback function may still be running on return, unless it returns ``true`` and the work doesn't re-arm itself. Explicitly flush or use cancel_delayed_work_sync() to wait on it.h](hMThe work callback function may still be running on return, unless it returns }(hj~hhhNhNubj)}(h``true``h]htrue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj~ubhi and the work doesn’t re-arm itself. 
Explicitly flush or use cancel_delayed_work_sync() to wait on it.}(hj~hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM>hjubh)}(hEThis function is safe to call from any context including IRQ handler.h]hEThis function is safe to call from any context including IRQ handler.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMBhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh%cancel_delayed_work_sync (C function)c.cancel_delayed_work_synchNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h:bool cancel_delayed_work_sync (struct delayed_work *dwork)h]jx)}(h9bool cancel_delayed_work_sync(struct delayed_work *dwork)h](j4)}(hj&h]hbool}(hjοhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjʿhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMLubj)}(h h]h }(hjܿhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjʿhhhjۿhMLubj)}(hcancel_delayed_work_synch]j)}(hcancel_delayed_work_synch]hcancel_delayed_work_sync}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjʿhhhjۿhMLubj )}(h(struct delayed_work *dwork)h]j)}(hstruct delayed_work *dworkh](j~)}(hjh]hstruct}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(h delayed_workh]h delayed_work}(hj(hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj%ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj*modnameN classnameNjj)}j]j)}jjsbc.cancel_delayed_work_syncasbuh1hhjubj)}(h h]h }(hjHhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjVhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hdworkh]hdwork}(hjchhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j hjʿhhhjۿhMLubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjƿhhhjۿhMLubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjۿhMLhjÿhhubj)}(hhh]h)}(h/cancel a delayed work and wait for it to finishh]h/cancel a delayed work and wait for it to finish}(hjhhhNhNubah}(h]h 
]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMLhjhhubah}(h]h ]h"]h$]h&]uh1jhjÿhhhjۿhMLubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(h**Parameters** ``struct delayed_work *dwork`` the delayed work cancel **Description** This is cancel_work_sync() for delayed works. **Return** ``true`` if **dwork** was pending, ``false`` otherwise.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMPhjubjS)}(hhh]jX)}(h7``struct delayed_work *dwork`` the delayed work cancel h](j^)}(h``struct delayed_work *dwork``h]j)}(hjh]hstruct delayed_work *dwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMMhjubjw)}(hhh]h)}(hthe delayed work cancelh]hthe delayed work cancel}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMMhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMMhjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hj h]h Description}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMOhjubh)}(h-This is cancel_work_sync() for delayed works.h]h-This is cancel_work_sync() for delayed works.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMNhjubh)}(h **Return**h]j)}(hj0h]hReturn}(hj2hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj.ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMPhjubh)}(h7``true`` if **dwork** was pending, ``false`` otherwise.h](j)}(h``true``h]htrue}(hjJhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjFubh if }(hjFhhhNhNubj)}(h **dwork**h]hdwork}(hj\hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjFubh was pending, }(hjFhhhNhNubj)}(h ``false``h]hfalse}(hjnhhhNhNubah}(h]h 
]h"]h$]h&]uh1jhjFubh otherwise.}(hjFhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMQhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhdisable_work (C function)c.disable_workhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h,bool disable_work (struct work_struct *work)h]jx)}(h+bool disable_work(struct work_struct *work)h](j4)}(hj&h]hbool}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM[ubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhM[ubj)}(h disable_workh]j)}(h disable_workh]h disable_work}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhM[ubj )}(h(struct work_struct *work)h]j)}(hstruct work_struct *workh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(h work_structh]h work_struct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jjsbc.disable_workasbuh1hhjubj)}(h h]h }(hj!hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hj/hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hworkh]hwork}(hj<hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j hjhhhjhM[ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhM[ubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhM[hjhhubj)}(hhh]h)}(hDisable and cancel a work itemh]hDisable and cancel a work item}(hjfhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM[hjchhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhM[ubeh}(h]h ](jfunctioneh"]h$]h&]jjjj~jj~jjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``struct work_struct *work`` work item to disable **Description** Disable **work** by incrementing its disable count and cancel it if currently pending. 
As long as the disable count is non-zero, any attempt to queue **work** will fail and return ``false``. The maximum supported disable depth is 2 to the power of ``WORK_OFFQ_DISABLE_BITS``, currently 65536. Can be called from any context. Returns ``true`` if **work** was pending, ``false`` otherwise.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM_hjubjS)}(hhh]jX)}(h2``struct work_struct *work`` work item to disable h](j^)}(h``struct work_struct *work``h]j)}(hjh]hstruct work_struct *work}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM\hjubjw)}(hhh]h)}(hwork item to disableh]hwork item to disable}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM\hjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhM\hjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM^hjubh)}(hX$Disable **work** by incrementing its disable count and cancel it if currently pending. As long as the disable count is non-zero, any attempt to queue **work** will fail and return ``false``. The maximum supported disable depth is 2 to the power of ``WORK_OFFQ_DISABLE_BITS``, currently 65536.h](hDisable }(hjhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh by incrementing its disable count and cancel it if currently pending. As long as the disable count is non-zero, any attempt to queue }(hjhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh will fail and return }(hjhhhNhNubj)}(h ``false``h]hfalse}(hj$hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh;. 
The maximum supported disable depth is 2 to the power of }(hjhhhNhNubj)}(h``WORK_OFFQ_DISABLE_BITS``h]hWORK_OFFQ_DISABLE_BITS}(hj6hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh, currently 65536.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM]hjubh)}(h^Can be called from any context. Returns ``true`` if **work** was pending, ``false`` otherwise.h](h(Can be called from any context. Returns }(hjOhhhNhNubj)}(h``true``h]htrue}(hjWhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjOubh if }(hjOhhhNhNubj)}(h**work**h]hwork}(hjihhhNhNubah}(h]h ]h"]h$]h&]uh1jhjOubh was pending, }(hjOhhhNhNubj)}(h ``false``h]hfalse}(hj{hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjOubh otherwise.}(hjOhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMbhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhdisable_work_sync (C function)c.disable_work_synchNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h1bool disable_work_sync (struct work_struct *work)h]jx)}(h0bool disable_work_sync(struct work_struct *work)h](j4)}(hj&h]hbool}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMmubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhMmubj)}(hdisable_work_synch]j)}(hdisable_work_synch]hdisable_work_sync}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhMmubj )}(h(struct work_struct *work)h]j)}(hstruct work_struct *workh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(h work_structh]h work_struct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jjsbc.disable_work_syncasbuh1hhjubj)}(h h]h }(hj.hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hj<hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hworkh]hwork}(hjIhhhNhNubah}(h]h 
]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j hjhhhjhMmubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhMmubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhMmhjhhubj)}(hhh]h)}(h%Disable, cancel and drain a work itemh]h%Disable, cancel and drain a work item}(hjshhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMmhjphhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhMmubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``struct work_struct *work`` work item to disable **Description** Similar to disable_work() but also wait for **work** to finish if currently executing. Must be called from a sleepable context if **work** was last queued on a non-BH workqueue. Can also be called from non-hardirq atomic contexts including BH if **work** was last queued on a BH workqueue. Returns ``true`` if **work** was pending, ``false`` otherwise.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMqhjubjS)}(hhh]jX)}(h2``struct work_struct *work`` work item to disable h](j^)}(h``struct work_struct *work``h]j)}(hjh]hstruct work_struct *work}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMnhjubjw)}(hhh]h)}(hwork item to disableh]hwork item to disable}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMnhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMnhjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMphjubh)}(hVSimilar to disable_work() but also wait for **work** to finish if currently executing.h](h,Similar to disable_work() but also wait for }(hjhhhNhNubj)}(h**work**h]hwork}(hj hhhNhNubah}(h]h 
]h"]h$]h&]uh1jhjubh" to finish if currently executing.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMohjubh)}(hMust be called from a sleepable context if **work** was last queued on a non-BH workqueue. Can also be called from non-hardirq atomic contexts including BH if **work** was last queued on a BH workqueue.h](h+Must be called from a sleepable context if }(hj&hhhNhNubj)}(h**work**h]hwork}(hj.hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj&ubhl was last queued on a non-BH workqueue. Can also be called from non-hardirq atomic contexts including BH if }(hj&hhhNhNubj)}(h**work**h]hwork}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj&ubh# was last queued on a BH workqueue.}(hj&hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMrhjubh)}(h>Returns ``true`` if **work** was pending, ``false`` otherwise.h](hReturns }(hjYhhhNhNubj)}(h``true``h]htrue}(hjahhhNhNubah}(h]h ]h"]h$]h&]uh1jhjYubh if }(hjYhhhNhNubj)}(h**work**h]hwork}(hjshhhNhNubah}(h]h ]h"]h$]h&]uh1jhjYubh was pending, }(hjYhhhNhNubj)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjYubh otherwise.}(hjYhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMvhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhenable_work (C function) c.enable_workhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h+bool enable_work (struct work_struct *work)h]jx)}(h*bool enable_work(struct work_struct *work)h](j4)}(hj&h]hbool}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhMubj)}(h enable_workh]j)}(h enable_workh]h enable_work}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhMubj )}(h(struct work_struct *work)h]j)}(hstruct work_struct 
*workh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(h work_structh]h work_struct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jjsb c.enable_workasbuh1hhjubj)}(h h]h }(hj8hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjFhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hworkh]hwork}(hjShhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j hjhhhjhMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhMubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhMhjhhubj)}(hhh]h)}(hEnable a work itemh]hEnable a work item}(hj}hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjzhhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hX8**Parameters** ``struct work_struct *work`` work item to enable **Description** Undo disable_work[_sync]() by decrementing **work**'s disable count. **work** can only be queued if its disable count is 0. Can be called from any context. Returns ``true`` if the disable count reached 0. 
Otherwise, ``false``.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjS)}(hhh]jX)}(h1``struct work_struct *work`` work item to enable h](j^)}(h``struct work_struct *work``h]j)}(hjh]hstruct work_struct *work}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(hwork item to enableh]hwork item to enable}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMhjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(h{Undo disable_work[_sync]() by decrementing **work**'s disable count. **work** can only be queued if its disable count is 0.h](h+Undo disable_work[_sync]() by decrementing }(hjhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh’s disable count. }(hjhhhNhNubj)}(h**work**h]hwork}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh. can only be queued if its disable count is 0.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(hfCan be called from any context. Returns ``true`` if the disable count reached 0. Otherwise, ``false``.h](h(Can be called from any context. Returns }(hjBhhhNhNubj)}(h``true``h]htrue}(hjJhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjBubh, if the disable count reached 0. 
Otherwise, }(hjBhhhNhNubj)}(h ``false``h]hfalse}(hj\hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjBubh.}(hjBhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh!disable_delayed_work (C function)c.disable_delayed_workhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h6bool disable_delayed_work (struct delayed_work *dwork)h]jx)}(h5bool disable_delayed_work(struct delayed_work *dwork)h](j4)}(hj&h]hbool}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhMubj)}(hdisable_delayed_workh]j)}(hdisable_delayed_workh]hdisable_delayed_work}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhMubj )}(h(struct delayed_work *dwork)h]j)}(hstruct delayed_work *dworkh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(h delayed_workh]h delayed_work}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jjsbc.disable_delayed_workasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hdworkh]hdwork}(hj*hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j hjhhhjhMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhMubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhMhjhhubj)}(hhh]h)}(h&Disable and cancel a delayed work itemh]h&Disable and cancel a delayed work item}(hjThhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjQhhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjljjljjjuh1jlhhhjJhNhNubj)}(h**Parameters** ``struct delayed_work *dwork`` delayed work item to disable **Description** disable_work() for delayed work 
items.h](h)}(h**Parameters**h]j)}(hjvh]h Parameters}(hjxhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjtubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjpubjS)}(hhh]jX)}(h<``struct delayed_work *dwork`` delayed work item to disable h](j^)}(h``struct delayed_work *dwork``h]j)}(hjh]hstruct delayed_work *dwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(hdelayed work item to disableh]hdelayed work item to disable}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMhjubah}(h]h ]h"]h$]h&]uh1jRhjpubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjpubh)}(h&disable_work() for delayed work items.h]h&disable_work() for delayed work items.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjpubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh&disable_delayed_work_sync (C function)c.disable_delayed_work_synchNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h;bool disable_delayed_work_sync (struct delayed_work *dwork)h]jx)}(h:bool disable_delayed_work_sync(struct delayed_work *dwork)h](j4)}(hj&h]hbool}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hj#hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhj"hMubj)}(hdisable_delayed_work_synch]j)}(hdisable_delayed_work_synch]hdisable_delayed_work_sync}(hj5hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj1ubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhj"hMubj )}(h(struct delayed_work *dwork)h]j)}(hstruct delayed_work *dworkh](j~)}(hjh]hstruct}(hjQhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjMubj)}(h h]h }(hj^hhhNhNubah}(h]h 
]jah"]h$]h&]uh1jhjMubh)}(hhh]j)}(h delayed_workh]h delayed_work}(hjohhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjlubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjqmodnameN classnameNjj)}j]j)}jj7sbc.disable_delayed_work_syncasbuh1hhjMubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjMubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjMubj)}(hdworkh]hdwork}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjMubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjIubah}(h]h ]h"]h$]h&]jjuh1j hjhhhj"hMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhj hhhj"hMubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhj"hMhj hhubj)}(hhh]h)}(h-Disable, cancel and drain a delayed work itemh]h-Disable, cancel and drain a delayed work item}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jhj hhhj"hMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(h**Parameters** ``struct delayed_work *dwork`` delayed work item to disable **Description** disable_work_sync() for delayed work items.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjS)}(hhh]jX)}(h<``struct delayed_work *dwork`` delayed work item to disable h](j^)}(h``struct delayed_work *dwork``h]j)}(hjh]hstruct delayed_work *dwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(hdelayed work item to disableh]hdelayed work item to disable}(hj.hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj*hMhj+ubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhj*hMhj ubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjPh]h Description}(hjRhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(h+disable_work_sync() for delayed work items.h]h+disable_work_sync() for 
delayed work items.}(hjfhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh enable_delayed_work (C function)c.enable_delayed_workhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h5bool enable_delayed_work (struct delayed_work *dwork)h]jx)}(h4bool enable_delayed_work(struct delayed_work *dwork)h](j4)}(hj&h]hbool}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhMubj)}(henable_delayed_workh]j)}(henable_delayed_workh]henable_delayed_work}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhMubj )}(h(struct delayed_work *dwork)h]j)}(hstruct delayed_work *dworkh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(h delayed_workh]h delayed_work}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jjsbc.enable_delayed_workasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hdworkh]hdwork}(hj*hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j hjhhhjhMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhMubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhMhjhhubj)}(hhh]h)}(hEnable a delayed work itemh]hEnable a delayed work item}(hjThhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjQhhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjljjljjjuh1jlhhhjJhNhNubj)}(h**Parameters** ``struct delayed_work *dwork`` delayed work item to enable **Description** enable_work() for delayed work items.h](h)}(h**Parameters**h]j)}(hjvh]h Parameters}(hjxhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjtubah}(h]h 
]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjpubjS)}(hhh]jX)}(h;``struct delayed_work *dwork`` delayed work item to enable h](j^)}(h``struct delayed_work *dwork``h]j)}(hjh]hstruct delayed_work *dwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(hdelayed work item to enableh]hdelayed work item to enable}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMhjubah}(h]h ]h"]h$]h&]uh1jRhjpubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjpubh)}(h%enable_work() for delayed work items.h]h%enable_work() for delayed work items.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjpubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh!schedule_on_each_cpu (C function)c.schedule_on_each_cpuhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h+int schedule_on_each_cpu (work_func_t func)h]jx)}(h*int schedule_on_each_cpu(work_func_t func)h](j4)}(hinth]hint}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hj$hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhj#hMubj)}(hschedule_on_each_cpuh]j)}(hschedule_on_each_cpuh]hschedule_on_each_cpu}(hj6hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj2ubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhj#hMubj )}(h(work_func_t func)h]j)}(hwork_func_t funch](h)}(hhh]j)}(h work_func_th]h work_func_t}(hjUhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjRubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjWmodnameN classnameNjj)}j]j)}jj8sbc.schedule_on_each_cpuasbuh1hhjNubj)}(h h]h }(hjuhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjNubj)}(hfunch]hfunc}(hjhhhNhNubah}(h]h 
]jah"]h$]h&]uh1jhjNubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjJubah}(h]h ]h"]h$]h&]jjuh1j hjhhhj#hMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhj hhhj#hMubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhj#hMhj hhubj)}(hhh]h)}(h3execute a function synchronously on each online CPUh]h3execute a function synchronously on each online CPU}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jhj hhhj#hMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hX!**Parameters** ``work_func_t func`` the function to call **Description** schedule_on_each_cpu() executes **func** on each online CPU using the system workqueue and blocks until all CPUs have completed. schedule_on_each_cpu() is very slow. **Return** 0 on success, -errno on failure.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjS)}(hhh]jX)}(h*``work_func_t func`` the function to call h](j^)}(h``work_func_t func``h]j)}(hjh]hwork_func_t func}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(hthe function to callh]hthe function to call}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMhjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hj)h]h Description}(hj+hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj'ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(hschedule_on_each_cpu() executes **func** on each online CPU using the system workqueue and blocks until all CPUs have completed. 
schedule_on_each_cpu() is very slow.h](h schedule_on_each_cpu() executes }(hj?hhhNhNubj)}(h**func**h]hfunc}(hjGhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj?ubh} on each online CPU using the system workqueue and blocks until all CPUs have completed. schedule_on_each_cpu() is very slow.}(hj?hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(h **Return**h]j)}(hjbh]hReturn}(hjdhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj`ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(h 0 on success, -errno on failure.h]h 0 on success, -errno on failure.}(hjxhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh'execute_in_process_context (C function)c.execute_in_process_contexthNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hHint execute_in_process_context (work_func_t fn, struct execute_work *ew)h]jx)}(hGint execute_in_process_context(work_func_t fn, struct execute_work *ew)h](j4)}(hinth]hint}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhMubj)}(hexecute_in_process_contexth]j)}(hexecute_in_process_contexth]hexecute_in_process_context}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhMubj )}(h)(work_func_t fn, struct execute_work *ew)h](j)}(hwork_func_t fnh](h)}(hhh]j)}(h work_func_th]h work_func_t}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jjsbc.execute_in_process_contextasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hfnh]hfn}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct execute_work *ewh](j~)}(hjh]hstruct}(hj.hhhNhNubah}(h]h 
]jah"]h$]h&]uh1j}hj*ubj)}(h h]h }(hj;hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj*ubh)}(hhh]j)}(h execute_workh]h execute_work}(hjLhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjIubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjNmodnameN classnameNjj)}j]jc.execute_in_process_contextasbuh1hhj*ubj)}(h h]h }(hjjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj*ubj)}(hjah]h*}(hjxhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj*ubj)}(hewh]hew}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj*ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1j hjhhhjhMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhMubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhMhjhhubj)}(hhh]h)}(h.reliably execute the routine with user contexth]h.reliably execute the routine with user context}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``work_func_t fn`` the function to execute ``struct execute_work *ew`` guaranteed storage for the execute work structure (must be available when the work executes) **Description** Executes the function immediately if process context is available, otherwise schedules the function for delayed execution. 
**Return** 0 - function was executed 1 - function was scheduled for executionh](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjS)}(hhh](jX)}(h+``work_func_t fn`` the function to execute h](j^)}(h``work_func_t fn``h]j)}(hjh]hwork_func_t fn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(hthe function to executeh]hthe function to execute}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMhjubjX)}(hy``struct execute_work *ew`` guaranteed storage for the execute work structure (must be available when the work executes) h](j^)}(h``struct execute_work *ew``h]j)}(hj)h]hstruct execute_work *ew}(hj+hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj'ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj#ubjw)}(hhh]h)}(h\guaranteed storage for the execute work structure (must be available when the work executes)h]h\guaranteed storage for the execute work structure (must be available when the work executes)}(hjBhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj?ubah}(h]h ]h"]h$]h&]uh1jvhj#ubeh}(h]h ]h"]h$]h&]uh1jWhj>hMhjubeh}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjeh]h Description}(hjghhhNhNubah}(h]h ]h"]h$]h&]uh1jhjcubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(hzExecutes the function immediately if process context is available, otherwise schedules the function for delayed execution.h]hzExecutes the function immediately if process context is available, otherwise schedules the function for delayed execution.}(hj{hhhNhNubah}(h]h 
]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(hB0 - function was executed 1 - function was scheduled for executionh]hB0 - function was executed 1 - function was scheduled for execution}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh!free_workqueue_attrs (C function)c.free_workqueue_attrshNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h9void free_workqueue_attrs (struct workqueue_attrs *attrs)h]jx)}(h8void free_workqueue_attrs(struct workqueue_attrs *attrs)h](j4)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhMubj)}(hfree_workqueue_attrsh]j)}(hfree_workqueue_attrsh]hfree_workqueue_attrs}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhMubj )}(h(struct workqueue_attrs *attrs)h]j)}(hstruct workqueue_attrs *attrsh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj ubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj ubh)}(hhh]j)}(hworkqueue_attrsh]hworkqueue_attrs}(hj,hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj)ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj.modnameN classnameNjj)}j]j)}jjsbc.free_workqueue_attrsasbuh1hhj ubj)}(h h]h }(hjLhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj ubj)}(hjah]h*}(hjZhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj ubj)}(hattrsh]hattrs}(hjghhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j hjhhhjhMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhMubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhMhjhhubj)}(hhh]h)}(hfree a workqueue_attrsh]hfree a 
workqueue_attrs}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(h{**Parameters** ``struct workqueue_attrs *attrs`` workqueue_attrs to free **Description** Undo alloc_workqueue_attrs().h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjS)}(hhh]jX)}(h:``struct workqueue_attrs *attrs`` workqueue_attrs to free h](j^)}(h!``struct workqueue_attrs *attrs``h]j)}(hjh]hstruct workqueue_attrs *attrs}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(hworkqueue_attrs to freeh]hworkqueue_attrs to free}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMhjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hj h]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(hUndo alloc_workqueue_attrs().h]hUndo alloc_workqueue_attrs().}(hj#hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh"alloc_workqueue_attrs (C function)c.alloc_workqueue_attrshNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h5struct workqueue_attrs * alloc_workqueue_attrs (void)h]jx)}(h3struct workqueue_attrs *alloc_workqueue_attrs(void)h](j~)}(hjh]hstruct}(hjRhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjNhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hj`hhhNhNubah}(h]h 
]jah"]h$]h&]uh1jhjNhhhj_hMubh)}(hhh]j)}(hworkqueue_attrsh]hworkqueue_attrs}(hjqhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjnubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjsmodnameN classnameNjj)}j]j)}jalloc_workqueue_attrssbc.alloc_workqueue_attrsasbuh1hhjNhhhj_hMubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjNhhhj_hMubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjNhhhj_hMubj)}(halloc_workqueue_attrsh]j)}(hjh]halloc_workqueue_attrs}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjNhhhj_hMubj )}(h(void)h]j)}(hvoidh]j4)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjubah}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j hjNhhhj_hMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjJhhhj_hMubah}(h]jEah ](jjeh"]h$]h&]jj)jhuh1jqhj_hMhjGhhubj)}(hhh]h)}(hallocate a workqueue_attrsh]hallocate a workqueue_attrs}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jhjGhhhj_hMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(h**Parameters** ``void`` no arguments **Description** Allocate a new workqueue_attrs, initialize with default settings and return it. **Return** The allocated new workqueue_attr on success. 
``NULL`` on failure.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjS)}(hhh]jX)}(h``void`` no arguments h](j^)}(h``void``h]j)}(hj7h]hvoid}(hj9hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj5ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chKhj1ubjw)}(hhh]h)}(h no argumentsh]h no arguments}(hjPhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjLhKhjMubah}(h]h ]h"]h$]h&]uh1jvhj1ubeh}(h]h ]h"]h$]h&]uh1jWhjLhKhj.ubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjrh]h Description}(hjthhhNhNubah}(h]h ]h"]h$]h&]uh1jhjpubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chKhjubh)}(hOAllocate a new workqueue_attrs, initialize with default settings and return it.h]hOAllocate a new workqueue_attrs, initialize with default settings and return it.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(hAThe allocated new workqueue_attr on success. ``NULL`` on failure.h](h-The allocated new workqueue_attr on success. 
}(hjhhhNhNubj)}(h``NULL``h]hNULL}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh on failure.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhinit_worker_pool (C function)c.init_worker_poolhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h/int init_worker_pool (struct worker_pool *pool)h]jx)}(h.int init_worker_pool(struct worker_pool *pool)h](j4)}(hinth]hint}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhMubj)}(hinit_worker_poolh]j)}(hinit_worker_poolh]hinit_worker_pool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj ubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhMubj )}(h(struct worker_pool *pool)h]j)}(hstruct worker_pool *poolh](j~)}(hjh]hstruct}(hj-hhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj)ubj)}(h h]h }(hj:hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj)ubh)}(hhh]j)}(h worker_poolh]h worker_pool}(hjKhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjHubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjMmodnameN classnameNjj)}j]j)}jjsbc.init_worker_poolasbuh1hhj)ubj)}(h h]h }(hjkhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj)ubj)}(hjah]h*}(hjyhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj)ubj)}(hpoolh]hpool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj)ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj%ubah}(h]h ]h"]h$]h&]jjuh1j hjhhhjhMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhMubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhMhjhhubj)}(hhh]h)}(h'initialize a newly zalloc'd worker_poolh]h)initialize a newly zalloc’d worker_pool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hX]**Parameters** ``struct worker_pool *pool`` worker_pool to initialize **Description** Initialize a newly zalloc'd **pool**. It also allocates **pool->attrs**. 
**Return** 0 on success, -errno on failure. Even on failure, all fields inside **pool** proper are initialized and put_unbound_pool() can be called on **pool** safely to release it.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjS)}(hhh]jX)}(h7``struct worker_pool *pool`` worker_pool to initialize h](j^)}(h``struct worker_pool *pool``h]j)}(hjh]hstruct worker_pool *pool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(hworker_pool to initializeh]hworker_pool to initialize}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMhjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hj,h]h Description}(hj.hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj*ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(hIInitialize a newly zalloc'd **pool**. It also allocates **pool->attrs**.h](hInitialize a newly zalloc’d }(hjBhhhNhNubj)}(h**pool**h]hpool}(hjJhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjBubh. It also allocates }(hjBhhhNhNubj)}(h**pool->attrs**h]h pool->attrs}(hj\hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjBubh.}(hjBhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(h **Return**h]j)}(hjwh]hReturn}(hjyhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjuubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(h0 on success, -errno on failure. Even on failure, all fields inside **pool** proper are initialized and put_unbound_pool() can be called on **pool** safely to release it.h](hE0 on success, -errno on failure. 
Even on failure, all fields inside }(hjhhhNhNubj)}(h**pool**h]hpool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh@ proper are initialized and put_unbound_pool() can be called on }(hjhhhNhNubj)}(h**pool**h]hpool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh safely to release it.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhput_unbound_pool (C function)c.put_unbound_poolhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h0void put_unbound_pool (struct worker_pool *pool)h]jx)}(h/void put_unbound_pool(struct worker_pool *pool)h](j4)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM5ubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhM5ubj)}(hput_unbound_poolh]j)}(hput_unbound_poolh]hput_unbound_pool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhM5ubj )}(h(struct worker_pool *pool)h]j)}(hstruct worker_pool *poolh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hj*hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(h worker_poolh]h worker_pool}(hj;hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj8ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj=modnameN classnameNjj)}j]j)}jjsbc.put_unbound_poolasbuh1hhjubj)}(h h]h }(hj[hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjihhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hpoolh]hpool}(hjvhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j hjhhhjhM5ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhM5ubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhM5hjhhubj)}(hhh]h)}(hput a worker_poolh]hput a worker_pool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM5hjhhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhM5ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hXz**Parameters** ``struct 
worker_pool *pool`` worker_pool to put **Description** Put **pool**. If its refcnt reaches zero, it gets destroyed in RCU safe manner. get_unbound_pool() calls this function on its failure path and this function should be able to release pools which went through, successfully or not, init_worker_pool(). Should be called with wq_pool_mutex held.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM9hjubjS)}(hhh]jX)}(h0``struct worker_pool *pool`` worker_pool to put h](j^)}(h``struct worker_pool *pool``h]j)}(hjh]hstruct worker_pool *pool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM6hjubjw)}(hhh]h)}(hworker_pool to puth]hworker_pool to put}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM6hjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhM6hjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM8hjubh)}(hPut **pool**. If its refcnt reaches zero, it gets destroyed in RCU safe manner. get_unbound_pool() calls this function on its failure path and this function should be able to release pools which went through, successfully or not, init_worker_pool().h](hPut }(hj2hhhNhNubj)}(h**pool**h]hpool}(hj:hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj2ubh. If its refcnt reaches zero, it gets destroyed in RCU safe manner. 
get_unbound_pool() calls this function on its failure path and this function should be able to release pools which went through, successfully or not, init_worker_pool().}(hj2hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM7hjubh)}(h)Should be called with wq_pool_mutex held.h]h)Should be called with wq_pool_mutex held.}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM<hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhget_unbound_pool (C function)c.get_unbound_poolhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hKstruct worker_pool * get_unbound_pool (const struct workqueue_attrs *attrs)h]jx)}(hIstruct worker_pool *get_unbound_pool(const struct workqueue_attrs *attrs)h](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj~hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj~hhhjhMubh)}(hhh]j)}(h worker_poolh]h worker_pool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jget_unbound_poolsbc.get_unbound_poolasbuh1hhj~hhhjhMubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj~hhhjhMubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj~hhhjhMubj)}(hget_unbound_poolh]j)}(hjh]hget_unbound_pool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhj~hhhjhMubj )}(h%(const struct workqueue_attrs *attrs)h]j)}(h#const struct workqueue_attrs *attrsh](j~)}(hjh]hconst}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hj$hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkqueue_attrsh]hworkqueue_attrs}(hj5hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj2ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj7modnameN 
classnameNjj)}j]jc.get_unbound_poolasbuh1hhjubj)}(h h]h }(hjShhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjahhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hattrsh]hattrs}(hjnhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j hj~hhhjhMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjzhhhjhMubah}(h]juah ](jjeh"]h$]h&]jj)jhuh1jqhjhMhjwhhubj)}(hhh]h)}(h/get a worker_pool with the specified attributesh]h/get a worker_pool with the specified attributes}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jhjwhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``const struct workqueue_attrs *attrs`` the attributes of the worker_pool to get **Description** Obtain a worker_pool which has the same attributes as **attrs**, bump the reference count and return it. If there already is a matching worker_pool, it will be used; otherwise, this function attempts to create a new one. Should be called with wq_pool_mutex held. **Return** On success, a worker_pool with the same attributes as **attrs**. 
On failure, ``NULL``.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjS)}(hhh]jX)}(hQ``const struct workqueue_attrs *attrs`` the attributes of the worker_pool to get h](j^)}(h'``const struct workqueue_attrs *attrs``h]j)}(hjh]h#const struct workqueue_attrs *attrs}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(h(the attributes of the worker_pool to geth]h(the attributes of the worker_pool to get}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMhjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(hObtain a worker_pool which has the same attributes as **attrs**, bump the reference count and return it. If there already is a matching worker_pool, it will be used; otherwise, this function attempts to create a new one.h](h6Obtain a worker_pool which has the same attributes as }(hj*hhhNhNubj)}(h **attrs**h]hattrs}(hj2hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj*ubh, bump the reference count and return it. 
If there already is a matching worker_pool, it will be used; otherwise, this function attempts to create a new one.}(hj*hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(h)Should be called with wq_pool_mutex held.h]h)Should be called with wq_pool_mutex held.}(hjKhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(h **Return**h]j)}(hj\h]hReturn}(hj^hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjZubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(hVOn success, a worker_pool with the same attributes as **attrs**. On failure, ``NULL``.h](h6On success, a worker_pool with the same attributes as }(hjrhhhNhNubj)}(h **attrs**h]hattrs}(hjzhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjrubh. On failure, }(hjrhhhNhNubj)}(h``NULL``h]hNULL}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjrubh.}(hjrhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh wq_calc_pod_cpumask (C function)c.wq_calc_pod_cpumaskhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hAvoid wq_calc_pod_cpumask (struct workqueue_attrs *attrs, int cpu)h]jx)}(h@void wq_calc_pod_cpumask(struct workqueue_attrs *attrs, int cpu)h](j4)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMDubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhMDubj)}(hwq_calc_pod_cpumaskh]j)}(hwq_calc_pod_cpumaskh]hwq_calc_pod_cpumask}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhMDubj )}(h((struct workqueue_attrs *attrs, int cpu)h](j)}(hstruct workqueue_attrs *attrsh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h 
]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkqueue_attrsh]hworkqueue_attrs}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj"modnameN classnameNjj)}j]j)}jjsbc.wq_calc_pod_cpumaskasbuh1hhjubj)}(h h]h }(hj@hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjNhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hattrsh]hattrs}(hj[hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hint cpuh](j4)}(hinth]hint}(hjthhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjpubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjpubj)}(hcpuh]hcpu}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjpubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1j hjhhhjhMDubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhMDubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhMDhjhhubj)}(hhh]h)}(h'calculate a wq_attrs' cpumask for a podh]h)calculate a wq_attrs’ cpumask for a pod}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMDhjhhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhMDubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hXK**Parameters** ``struct workqueue_attrs *attrs`` the wq_attrs of the default pwq of the target workqueue ``int cpu`` the target CPU **Description** Calculate the cpumask a workqueue with **attrs** should use on **pod**. The result is stored in **attrs->__pod_cpumask**. If pod affinity is not enabled, **attrs->cpumask** is always used. If enabled and **pod** has online CPUs requested by **attrs**, the returned cpumask is the intersection of the possible CPUs of **pod** and **attrs->cpumask**. 
The caller is responsible for ensuring that the cpumask of **pod** stays stable.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMHhjubjS)}(hhh](jX)}(hZ``struct workqueue_attrs *attrs`` the wq_attrs of the default pwq of the target workqueue h](j^)}(h!``struct workqueue_attrs *attrs``h]j)}(hjh]hstruct workqueue_attrs *attrs}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMEhjubjw)}(hhh]h)}(h7the wq_attrs of the default pwq of the target workqueueh]h7the wq_attrs of the default pwq of the target workqueue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMEhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMEhjubjX)}(h``int cpu`` the target CPU h](j^)}(h ``int cpu``h]j)}(hj4h]hint cpu}(hj6hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj2ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMFhj.ubjw)}(hhh]h)}(hthe target CPUh]hthe target CPU}(hjMhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjIhMFhjJubah}(h]h ]h"]h$]h&]uh1jvhj.ubeh}(h]h ]h"]h$]h&]uh1jWhjIhMFhjubeh}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjoh]h Description}(hjqhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjmubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMHhjubh)}(hyCalculate the cpumask a workqueue with **attrs** should use on **pod**. The result is stored in **attrs->__pod_cpumask**.h](h'Calculate the cpumask a workqueue with }(hjhhhNhNubj)}(h **attrs**h]hattrs}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh should use on }(hjhhhNhNubj)}(h**pod**h]hpod}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh. 
The result is stored in }(hjhhhNhNubj)}(h**attrs->__pod_cpumask**h]hattrs->__pod_cpumask}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMGhjubh)}(hIf pod affinity is not enabled, **attrs->cpumask** is always used. If enabled and **pod** has online CPUs requested by **attrs**, the returned cpumask is the intersection of the possible CPUs of **pod** and **attrs->cpumask**.h](h If pod affinity is not enabled, }(hjhhhNhNubj)}(h**attrs->cpumask**h]hattrs->cpumask}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh is always used. If enabled and }(hjhhhNhNubj)}(h**pod**h]hpod}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh has online CPUs requested by }(hjhhhNhNubj)}(h **attrs**h]hattrs}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhC, the returned cpumask is the intersection of the possible CPUs of }(hjhhhNhNubj)}(h**pod**h]hpod}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh and }(hjhhhNhNubj)}(h**attrs->cpumask**h]hattrs->cpumask}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMJhjubh)}(hPThe caller is responsible for ensuring that the cpumask of **pod** stays stable.h](h;The caller is responsible for ensuring that the cpumask of }(hj3hhhNhNubj)}(h**pod**h]hpod}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj3ubh stays stable.}(hj3hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMNhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh"apply_workqueue_attrs (C function)c.apply_workqueue_attrshNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h\int apply_workqueue_attrs (struct workqueue_struct *wq, const struct workqueue_attrs *attrs)h]jx)}(h[int apply_workqueue_attrs(struct workqueue_struct *wq, const struct workqueue_attrs *attrs)h](j4)}(hinth]hint}(hjthhhNhNubah}(h]h 
]j@ah"]h$]h&]uh1j3hjphhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjphhhjhMubj)}(happly_workqueue_attrsh]j)}(happly_workqueue_attrsh]happly_workqueue_attrs}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjphhhjhMubj )}(hB(struct workqueue_struct *wq, const struct workqueue_attrs *attrs)h](j)}(hstruct workqueue_struct *wqh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jjsbc.apply_workqueue_attrsasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hwqh]hwq}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(h#const struct workqueue_attrs *attrsh](j~)}(hjh]hconst}(hj#hhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hj0hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj~)}(hjh]hstruct}(hj>hhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjKhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkqueue_attrsh]hworkqueue_attrs}(hj\hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjYubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj^modnameN classnameNjj)}j]jc.apply_workqueue_attrsasbuh1hhjubj)}(h h]h }(hjzhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hattrsh]hattrs}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1j hjphhhjhMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjlhhhjhMubah}(h]jgah ](jjeh"]h$]h&]jj)jhuh1jqhjhMhjihhubj)}(hhh]h)}(h1apply new workqueue_attrs to an unbound workqueueh]h1apply new workqueue_attrs to an unbound workqueue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jhjihhhjhMubeh}(h]h 
](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``struct workqueue_struct *wq`` the target workqueue ``const struct workqueue_attrs *attrs`` the workqueue_attrs to apply, allocated with alloc_workqueue_attrs() **Description** Apply **attrs** to an unbound workqueue **wq**. Unless disabled, this function maps a separate pwq to each CPU pod with possibles CPUs in **attrs->cpumask** so that work items are affine to the pod it was issued on. Older pwqs are released as in-flight work items finish. Note that a work item which repeatedly requeues itself back-to-back will stay on its current pwq. Performs GFP_KERNEL allocations. **Return** 0 on success and -errno on failure.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjS)}(hhh](jX)}(h5``struct workqueue_struct *wq`` the target workqueue h](j^)}(h``struct workqueue_struct *wq``h]j)}(hjh]hstruct workqueue_struct *wq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(hthe target workqueueh]hthe target workqueue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMhjubjX)}(hm``const struct workqueue_attrs *attrs`` the workqueue_attrs to apply, allocated with alloc_workqueue_attrs() h](j^)}(h'``const struct workqueue_attrs *attrs``h]j)}(hj9h]h#const struct workqueue_attrs *attrs}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj7ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj3ubjw)}(hhh]h)}(hDthe workqueue_attrs to apply, allocated with alloc_workqueue_attrs()h]hDthe workqueue_attrs to apply, allocated with alloc_workqueue_attrs()}(hjRhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjNhMhjOubah}(h]h ]h"]h$]h&]uh1jvhj3ubeh}(h]h ]h"]h$]h&]uh1jWhjNhMhjubeh}(h]h 
]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjth]h Description}(hjvhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjrubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(hXqApply **attrs** to an unbound workqueue **wq**. Unless disabled, this function maps a separate pwq to each CPU pod with possibles CPUs in **attrs->cpumask** so that work items are affine to the pod it was issued on. Older pwqs are released as in-flight work items finish. Note that a work item which repeatedly requeues itself back-to-back will stay on its current pwq.h](hApply }(hjhhhNhNubj)}(h **attrs**h]hattrs}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh to an unbound workqueue }(hjhhhNhNubj)}(h**wq**h]hwq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh\. Unless disabled, this function maps a separate pwq to each CPU pod with possibles CPUs in }(hjhhhNhNubj)}(h**attrs->cpumask**h]hattrs->cpumask}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh so that work items are affine to the pod it was issued on. Older pwqs are released as in-flight work items finish. 
Note that a work item which repeatedly requeues itself back-to-back will stay on its current pwq.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(h Performs GFP_KERNEL allocations.h]h Performs GFP_KERNEL allocations.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubh)}(h#0 on success and -errno on failure.h]h#0 on success and -errno on failure.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh"unbound_wq_update_pwq (C function)c.unbound_wq_update_pwqhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hAvoid unbound_wq_update_pwq (struct workqueue_struct *wq, int cpu)h]jx)}(h@void unbound_wq_update_pwq(struct workqueue_struct *wq, int cpu)h](j4)}(hvoidh]hvoid}(hj%hhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj!hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hj4hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj!hhhj3hMubj)}(hunbound_wq_update_pwqh]j)}(hunbound_wq_update_pwqh]hunbound_wq_update_pwq}(hjFhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjBubah}(h]h ](jjeh"]h$]h&]jjuh1jhj!hhhj3hMubj )}(h&(struct workqueue_struct *wq, int cpu)h](j)}(hstruct workqueue_struct *wqh](j~)}(hjh]hstruct}(hjbhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj^ubj)}(h h]h }(hjohhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj^ubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj}ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jjHsbc.unbound_wq_update_pwqasbuh1hhj^ubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj^ubj)}(hjah]h*}(hjhhhNhNubah}(h]h 
]jah"]h$]h&]uh1jhj^ubj)}(hwqh]hwq}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj^ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjZubj)}(hint cpuh](j4)}(hinth]hint}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hcpuh]hcpu}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjZubeh}(h]h ]h"]h$]h&]jjuh1j hj!hhhj3hMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhj3hMubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhj3hMhjhhubj)}(hhh]h)}(h%update a pwq slot for CPU hot[un]plugh]h%update a pwq slot for CPU hot[un]plug}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jhjhhhj3hMubeh}(h]h ](jfunctioneh"]h$]h&]jjjj2jj2jjjuh1jlhhhjJhNhNubj)}(hXj**Parameters** ``struct workqueue_struct *wq`` the target workqueue ``int cpu`` the CPU to update the pwq slot for **Description** This function is to be called from ``CPU_DOWN_PREPARE``, ``CPU_ONLINE`` and ``CPU_DOWN_FAILED``. **cpu** is in the same pod of the CPU being hot[un]plugged. If pod affinity can't be adjusted due to memory allocation failure, it falls back to **wq->dfl_pwq** which may not be optimal but is always correct. Note that when the last allowed CPU of a pod goes offline for a workqueue with a cpumask spanning multiple pods, the workers which were already executing the work items for the workqueue will lose their CPU affinity and may execute on any CPU. This is similar to how per-cpu workqueues behave on CPU_DOWN. 
If a workqueue user wants strict affinity, it's the user's responsibility to flush the work item from CPU_DOWN_PREPARE.h](h)}(h**Parameters**h]j)}(hj<h]h Parameters}(hj>hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj:ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj6ubjS)}(hhh](jX)}(h5``struct workqueue_struct *wq`` the target workqueue h](j^)}(h``struct workqueue_struct *wq``h]j)}(hj[h]hstruct workqueue_struct *wq}(hj]hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjYubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjUubjw)}(hhh]h)}(hthe target workqueueh]hthe target workqueue}(hjthhhNhNubah}(h]h ]h"]h$]h&]uh1hhjphMhjqubah}(h]h ]h"]h$]h&]uh1jvhjUubeh}(h]h ]h"]h$]h&]uh1jWhjphMhjRubjX)}(h/``int cpu`` the CPU to update the pwq slot for h](j^)}(h ``int cpu``h]j)}(hjh]hint cpu}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(h"the CPU to update the pwq slot forh]h"the CPU to update the pwq slot for}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMhjRubeh}(h]h ]h"]h$]h&]uh1jRhj6ubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj6ubh)}(hThis function is to be called from ``CPU_DOWN_PREPARE``, ``CPU_ONLINE`` and ``CPU_DOWN_FAILED``. **cpu** is in the same pod of the CPU being hot[un]plugged.h](h#This function is to be called from }(hjhhhNhNubj)}(h``CPU_DOWN_PREPARE``h]hCPU_DOWN_PREPARE}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh, }(hjhhhNhNubj)}(h``CPU_ONLINE``h]h CPU_ONLINE}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh and }(hjhhhNhNubj)}(h``CPU_DOWN_FAILED``h]hCPU_DOWN_FAILED}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh. 
}(hjhhhNhNubj)}(h**cpu**h]hcpu}(hj#hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh4 is in the same pod of the CPU being hot[un]plugged.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj6ubh)}(hIf pod affinity can't be adjusted due to memory allocation failure, it falls back to **wq->dfl_pwq** which may not be optimal but is always correct.h](hWIf pod affinity can’t be adjusted due to memory allocation failure, it falls back to }(hj<hhhNhNubj)}(h**wq->dfl_pwq**h]h wq->dfl_pwq}(hjDhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj<ubh0 which may not be optimal but is always correct.}(hj<hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM"hj6ubh)}(hXNote that when the last allowed CPU of a pod goes offline for a workqueue with a cpumask spanning multiple pods, the workers which were already executing the work items for the workqueue will lose their CPU affinity and may execute on any CPU. This is similar to how per-cpu workqueues behave on CPU_DOWN. If a workqueue user wants strict affinity, it's the user's responsibility to flush the work item from CPU_DOWN_PREPARE.h]hXNote that when the last allowed CPU of a pod goes offline for a workqueue with a cpumask spanning multiple pods, the workers which were already executing the work items for the workqueue will lose their CPU affinity and may execute on any CPU. This is similar to how per-cpu workqueues behave on CPU_DOWN. 
If a workqueue user wants strict affinity, it’s the user’s responsibility to flush the work item from CPU_DOWN_PREPARE.}(hj]hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM%hj6ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh!wq_adjust_max_active (C function)c.wq_adjust_max_activehNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h7void wq_adjust_max_active (struct workqueue_struct *wq)h]jx)}(h6void wq_adjust_max_active(struct workqueue_struct *wq)h](j4)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhMubj)}(hwq_adjust_max_activeh]j)}(hwq_adjust_max_activeh]hwq_adjust_max_active}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhMubj )}(h(struct workqueue_struct *wq)h]j)}(hstruct workqueue_struct *wqh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jjsbc.wq_adjust_max_activeasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hwqh]hwq}(hj"hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j hjhhhjhMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhMubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhMhjhhubj)}(hhh]h)}(h/update a wq's max_active to the current settingh]h1update a wq’s max_active to the current setting}(hjLhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjIhhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjdjjdjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``struct workqueue_struct *wq`` target workqueue 
**Description** If **wq** isn't freezing, set **wq->max_active** to the saved_max_active and activate inactive work items accordingly. If **wq** is freezing, clear **wq->max_active** to zero.h](h)}(h**Parameters**h]j)}(hjnh]h Parameters}(hjphhhNhNubah}(h]h ]h"]h$]h&]uh1jhjlubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjhubjS)}(hhh]jX)}(h1``struct workqueue_struct *wq`` target workqueue h](j^)}(h``struct workqueue_struct *wq``h]j)}(hjh]hstruct workqueue_struct *wq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(htarget workqueueh]htarget workqueue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMhjubah}(h]h ]h"]h$]h&]uh1jRhjhubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjhubh)}(hIf **wq** isn't freezing, set **wq->max_active** to the saved_max_active and activate inactive work items accordingly. If **wq** is freezing, clear **wq->max_active** to zero.h](hIf }(hjhhhNhNubj)}(h**wq**h]hwq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh isn’t freezing, set }(hjhhhNhNubj)}(h**wq->max_active**h]hwq->max_active}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhJ to the saved_max_active and activate inactive work items accordingly. 
If }(hjhhhNhNubj)}(h**wq**h]hwq}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh is freezing, clear }(hjhhhNhNubj)}(h**wq->max_active**h]hwq->max_active}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh to zero.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjhubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhdestroy_workqueue (C function)c.destroy_workqueuehNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h4void destroy_workqueue (struct workqueue_struct *wq)h]jx)}(h3void destroy_workqueue(struct workqueue_struct *wq)h](j4)}(hvoidh]hvoid}(hjUhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjQhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjdhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjQhhhjchMubj)}(hdestroy_workqueueh]j)}(hdestroy_workqueueh]hdestroy_workqueue}(hjvhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjrubah}(h]h ](jjeh"]h$]h&]jjuh1jhjQhhhjchMubj )}(h(struct workqueue_struct *wq)h]j)}(hstruct workqueue_struct *wqh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jjxsbc.destroy_workqueueasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hwqh]hwq}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j hjQhhhjchMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjMhhhjchMubah}(h]jHah ](jjeh"]h$]h&]jj)jhuh1jqhjchMhjJhhubj)}(hhh]h)}(hsafely terminate a workqueueh]hsafely terminate a workqueue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jhjJhhhjchMubeh}(h]h ](jfunctioneh"]h$]h&]jjjj-jj-jjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``struct workqueue_struct *wq`` 
target workqueue **Description** Safely destroy a workqueue. All work currently pending will be done first. This function does NOT guarantee that non-pending work that has been submitted with queue_delayed_work() and similar functions will be done before destroying the workqueue. The fundamental problem is that, currently, the workqueue has no way of accessing non-pending delayed_work. delayed_work is only linked on the timer-side. All delayed_work must, therefore, be canceled before calling this function. TODO: It would be better if the problem described above wouldn't exist and destroy_workqueue() would cleanly cancel all pending and non-pending delayed_work.h](h)}(h**Parameters**h]j)}(hj7h]h Parameters}(hj9hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj5ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj1ubjS)}(hhh]jX)}(h1``struct workqueue_struct *wq`` target workqueue h](j^)}(h``struct workqueue_struct *wq``h]j)}(hjVh]hstruct workqueue_struct *wq}(hjXhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjTubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjPubjw)}(hhh]h)}(htarget workqueueh]htarget workqueue}(hjohhhNhNubah}(h]h ]h"]h$]h&]uh1hhjkhMhjlubah}(h]h ]h"]h$]h&]uh1jvhjPubeh}(h]h ]h"]h$]h&]uh1jWhjkhMhjMubah}(h]h ]h"]h$]h&]uh1jRhj1ubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj1ubh)}(hJSafely destroy a workqueue. All work currently pending will be done first.h]hJSafely destroy a workqueue. 
All work currently pending will be done first.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj1ubh)}(hXThis function does NOT guarantee that non-pending work that has been submitted with queue_delayed_work() and similar functions will be done before destroying the workqueue. The fundamental problem is that, currently, the workqueue has no way of accessing non-pending delayed_work. delayed_work is only linked on the timer-side. All delayed_work must, therefore, be canceled before calling this function.h]hXThis function does NOT guarantee that non-pending work that has been submitted with queue_delayed_work() and similar functions will be done before destroying the workqueue. The fundamental problem is that, currently, the workqueue has no way of accessing non-pending delayed_work. delayed_work is only linked on the timer-side. All delayed_work must, therefore, be canceled before calling this function.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj1ubh)}(hTODO: It would be better if the problem described above wouldn't exist and destroy_workqueue() would cleanly cancel all pending and non-pending delayed_work.h]hTODO: It would be better if the problem described above wouldn’t exist and destroy_workqueue() would cleanly cancel all pending and non-pending delayed_work.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj1ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh%workqueue_set_max_active (C function)c.workqueue_set_max_activehNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hKvoid workqueue_set_max_active (struct workqueue_struct *wq, int max_active)h]jx)}(hJvoid workqueue_set_max_active(struct workqueue_struct *wq, int max_active)h](j4)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h 
]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM.ubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhM.ubj)}(hworkqueue_set_max_activeh]j)}(hworkqueue_set_max_activeh]hworkqueue_set_max_active}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhM.ubj )}(h-(struct workqueue_struct *wq, int max_active)h](j)}(hstruct workqueue_struct *wqh](j~)}(hjh]hstruct}(hj1hhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj-ubj)}(h h]h }(hj>hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj-ubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjOhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjLubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjQmodnameN classnameNjj)}j]j)}jjsbc.workqueue_set_max_activeasbuh1hhj-ubj)}(h h]h }(hjohhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj-ubj)}(hjah]h*}(hj}hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj-ubj)}(hwqh]hwq}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj-ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj)ubj)}(hint max_activeh](j4)}(hinth]hint}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h max_activeh]h max_active}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj)ubeh}(h]h ]h"]h$]h&]jjuh1j hjhhhjhM.ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhM.ubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhM.hjhhubj)}(hhh]h)}(h adjust max_active of a workqueueh]h adjust max_active of a workqueue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM.hjhhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhM.ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``struct workqueue_struct *wq`` target workqueue ``int max_active`` new max_active value. **Description** Set max_active of **wq** to **max_active**. See the alloc_workqueue() function comment. 
**Context** Don't call from IRQ context.h](h)}(h**Parameters**h]j)}(hj h]h Parameters}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM2hjubjS)}(hhh](jX)}(h1``struct workqueue_struct *wq`` target workqueue h](j^)}(h``struct workqueue_struct *wq``h]j)}(hj*h]hstruct workqueue_struct *wq}(hj,hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj(ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM/hj$ubjw)}(hhh]h)}(htarget workqueueh]htarget workqueue}(hjChhhNhNubah}(h]h ]h"]h$]h&]uh1hhj?hM/hj@ubah}(h]h ]h"]h$]h&]uh1jvhj$ubeh}(h]h ]h"]h$]h&]uh1jWhj?hM/hj!ubjX)}(h)``int max_active`` new max_active value. h](j^)}(h``int max_active``h]j)}(hjch]hint max_active}(hjehhhNhNubah}(h]h ]h"]h$]h&]uh1jhjaubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM0hj]ubjw)}(hhh]h)}(hnew max_active value.h]hnew max_active value.}(hj|hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjxhM0hjyubah}(h]h ]h"]h$]h&]uh1jvhj]ubeh}(h]h ]h"]h$]h&]uh1jWhjxhM0hj!ubeh}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM2hjubh)}(hWSet max_active of **wq** to **max_active**. See the alloc_workqueue() function comment.h](hSet max_active of }(hjhhhNhNubj)}(h**wq**h]hwq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh to }(hjhhhNhNubj)}(h**max_active**h]h max_active}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh-. 
See the alloc_workqueue() function comment.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM1hjubh)}(h **Context**h]j)}(hjh]hContext}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM4hjubh)}(hDon't call from IRQ context.h]hDon’t call from IRQ context.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM5hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh%workqueue_set_min_active (C function)c.workqueue_set_min_activehNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hKvoid workqueue_set_min_active (struct workqueue_struct *wq, int min_active)h]jx)}(hJvoid workqueue_set_min_active(struct workqueue_struct *wq, int min_active)h](j4)}(hvoidh]hvoid}(hj.hhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj*hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMPubj)}(h h]h }(hj=hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj*hhhj<hMPubj)}(hworkqueue_set_min_activeh]j)}(hworkqueue_set_min_activeh]hworkqueue_set_min_active}(hjOhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjKubah}(h]h ](jjeh"]h$]h&]jjuh1jhj*hhhj<hMPubj )}(h-(struct workqueue_struct *wq, int min_active)h](j)}(hstruct workqueue_struct *wqh](j~)}(hjh]hstruct}(hjkhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjgubj)}(h h]h }(hjxhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjgubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jjQsbc.workqueue_set_min_activeasbuh1hhjgubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjgubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjgubj)}(hwqh]hwq}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjgubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjcubj)}(hint min_activeh](j4)}(hinth]hint}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjubj)}(h h]h }(hjhhhNhNubah}(h]h 
]jah"]h$]h&]uh1jhjubj)}(h min_activeh]h min_active}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjcubeh}(h]h ]h"]h$]h&]jjuh1j hj*hhhj<hMPubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhj&hhhj<hMPubah}(h]j!ah ](jjeh"]h$]h&]jj)jhuh1jqhj<hMPhj#hhubj)}(hhh]h)}(h)adjust min_active of an unbound workqueueh]h)adjust min_active of an unbound workqueue}(hj#hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMPhj hhubah}(h]h ]h"]h$]h&]uh1jhj#hhhj<hMPubeh}(h]h ](jfunctioneh"]h$]h&]jjjj;jj;jjjuh1jlhhhjJhNhNubj)}(hX(**Parameters** ``struct workqueue_struct *wq`` target unbound workqueue ``int min_active`` new min_active value **Description** Set min_active of an unbound workqueue. Unlike other types of workqueues, an unbound workqueue is not guaranteed to be able to process max_active interdependent work items. Instead, an unbound workqueue is guaranteed to be able to process min_active number of interdependent work items which is ``WQ_DFL_MIN_ACTIVE`` by default. 
Use this function to adjust the min_active value between 0 and the current max_active.h](h)}(h**Parameters**h]j)}(hjEh]h Parameters}(hjGhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjCubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMThj?ubjS)}(hhh](jX)}(h9``struct workqueue_struct *wq`` target unbound workqueue h](j^)}(h``struct workqueue_struct *wq``h]j)}(hjdh]hstruct workqueue_struct *wq}(hjfhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjbubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMQhj^ubjw)}(hhh]h)}(htarget unbound workqueueh]htarget unbound workqueue}(hj}hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjyhMQhjzubah}(h]h ]h"]h$]h&]uh1jvhj^ubeh}(h]h ]h"]h$]h&]uh1jWhjyhMQhj[ubjX)}(h(``int min_active`` new min_active value h](j^)}(h``int min_active``h]j)}(hjh]hint min_active}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMRhjubjw)}(hhh]h)}(hnew min_active valueh]hnew min_active value}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMRhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMRhj[ubeh}(h]h ]h"]h$]h&]uh1jRhj?ubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMThj?ubh)}(hXHSet min_active of an unbound workqueue. Unlike other types of workqueues, an unbound workqueue is not guaranteed to be able to process max_active interdependent work items. Instead, an unbound workqueue is guaranteed to be able to process min_active number of interdependent work items which is ``WQ_DFL_MIN_ACTIVE`` by default.h](hX'Set min_active of an unbound workqueue. Unlike other types of workqueues, an unbound workqueue is not guaranteed to be able to process max_active interdependent work items. 
Instead, an unbound workqueue is guaranteed to be able to process min_active number of interdependent work items which is }(hjhhhNhNubj)}(h``WQ_DFL_MIN_ACTIVE``h]hWQ_DFL_MIN_ACTIVE}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh by default.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMShj?ubh)}(hVUse this function to adjust the min_active value between 0 and the current max_active.h]hVUse this function to adjust the min_active value between 0 and the current max_active.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMYhj?ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhcurrent_work (C function)c.current_workhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h(struct work_struct * current_work (void)h]jx)}(h&struct work_struct *current_work(void)h](j~)}(hjh]hstruct}(hj>hhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj:hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMkubj)}(h h]h }(hjLhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj:hhhjKhMkubh)}(hhh]j)}(h work_structh]h work_struct}(hj]hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjZubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj_modnameN classnameNjj)}j]j)}j current_worksbc.current_workasbuh1hhj:hhhjKhMkubj)}(h h]h }(hj~hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj:hhhjKhMkubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj:hhhjKhMkubj)}(h current_workh]j)}(hj{h]h current_work}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhj:hhhjKhMkubj )}(h(void)h]j)}(hvoidh]j4)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjubah}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j hj:hhhjKhMkubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhj6hhhjKhMkubah}(h]j1ah ](jjeh"]h$]h&]jj)jhuh1jqhjKhMkhj3hhubj)}(hhh]h)}(h'retrieve ``current`` task's work structh](h retrieve }(hjhhhNhNubj)}(h ``current``h]hcurrent}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh task’s work 
struct}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMkhjhhubah}(h]h ]h"]h$]h&]uh1jhj3hhhjKhMkubeh}(h]h ](jfunctioneh"]h$]h&]jjjj jj jjjuh1jlhhhjJhNhNubj)}(hX'**Parameters** ``void`` no arguments **Description** Determine if ``current`` task is a workqueue worker and what it's working on. Useful to find out the context that the ``current`` task is running in. **Return** work struct if ``current`` task is a workqueue worker, ``NULL`` otherwise.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMohjubjS)}(hhh]jX)}(h``void`` no arguments h](j^)}(h``void``h]j)}(hj5h]hvoid}(hj7hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj3ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chKhj/ubjw)}(hhh]h)}(h no argumentsh]h no arguments}(hjNhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjJhKhjKubah}(h]h ]h"]h$]h&]uh1jvhj/ubeh}(h]h ]h"]h$]h&]uh1jWhjJhKhj,ubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjph]h Description}(hjrhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjnubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chKhjubh)}(hDetermine if ``current`` task is a workqueue worker and what it's working on. Useful to find out the context that the ``current`` task is running in.h](h Determine if }(hjhhhNhNubj)}(h ``current``h]hcurrent}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh` task is a workqueue worker and what it’s working on. 
Useful to find out the context that the }(hjhhhNhNubj)}(h ``current``h]hcurrent}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh task is running in.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMlhjubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMohjubh)}(hJwork struct if ``current`` task is a workqueue worker, ``NULL`` otherwise.h](hwork struct if }(hjhhhNhNubj)}(h ``current``h]hcurrent}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh task is a workqueue worker, }(hjhhhNhNubj)}(h``NULL``h]hNULL}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh otherwise.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMphjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh)current_is_workqueue_rescuer (C function)c.current_is_workqueue_rescuerhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h(bool current_is_workqueue_rescuer (void)h]jx)}(h'bool current_is_workqueue_rescuer(void)h](j4)}(hj&h]hbool}(hj$hhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM{ubj)}(h h]h }(hj2hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj hhhj1hM{ubj)}(hcurrent_is_workqueue_rescuerh]j)}(hcurrent_is_workqueue_rescuerh]hcurrent_is_workqueue_rescuer}(hjDhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj@ubah}(h]h ](jjeh"]h$]h&]jjuh1jhj hhhj1hM{ubj )}(h(void)h]j)}(hvoidh]j4)}(hvoidh]hvoid}(hj`hhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj\ubah}(h]h ]h"]h$]h&]noemphjjuh1jhjXubah}(h]h ]h"]h$]h&]jjuh1j hj hhhj1hM{ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhj1hM{ubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhj1hM{hjhhubj)}(hhh]h)}(h!is ``current`` workqueue rescuer?h](his }(hjhhhNhNubj)}(h ``current``h]hcurrent}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh workqueue rescuer?}(hjhhhNhNubeh}(h]h 
]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM{hjhhubah}(h]h ]h"]h$]h&]uh1jhjhhhj1hM{ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``void`` no arguments **Description** Determine whether ``current`` is a workqueue rescuer. Can be used from work functions to determine whether it's being run off the rescuer task. **Return** ``true`` if ``current`` is a workqueue rescuer. ``false`` otherwise.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjS)}(hhh]jX)}(h``void`` no arguments h](j^)}(h``void``h]j)}(hjh]hvoid}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chKhjubjw)}(hhh]h)}(h no argumentsh]h no arguments}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhKhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhKhjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chKhjubh)}(hDetermine whether ``current`` is a workqueue rescuer. Can be used from work functions to determine whether it's being run off the rescuer task.h](hDetermine whether }(hj.hhhNhNubj)}(h ``current``h]hcurrent}(hj6hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj.ubhu is a workqueue rescuer. 
Can be used from work functions to determine whether it’s being run off the rescuer task.}(hj.hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM|hjubh)}(h **Return**h]j)}(hjQh]hReturn}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1jhjOubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(hD``true`` if ``current`` is a workqueue rescuer. ``false`` otherwise.h](j)}(h``true``h]htrue}(hjkhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjgubh if }(hjghhhNhNubj)}(h ``current``h]hcurrent}(hj}hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjgubh is a workqueue rescuer. }(hjghhhNhNubj)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjgubh otherwise.}(hjghhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh workqueue_congested (C function)c.workqueue_congestedhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h?bool workqueue_congested (int cpu, struct workqueue_struct *wq)h]jx)}(h>bool workqueue_congested(int cpu, struct workqueue_struct *wq)h](j4)}(hj&h]hbool}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhMubj)}(hworkqueue_congestedh]j)}(hworkqueue_congestedh]hworkqueue_congested}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhMubj )}(h&(int cpu, struct workqueue_struct *wq)h](j)}(hint cpuh](j4)}(hinth]hint}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hcpuh]hcpu}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct workqueue_struct *wqh](j~)}(hjh]hstruct}(hj9hhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj5ubj)}(h h]h }(hjFhhhNhNubah}(h]h 
]jah"]h$]h&]uh1jhj5ubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjWhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjTubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjYmodnameN classnameNjj)}j]j)}jjsbc.workqueue_congestedasbuh1hhj5ubj)}(h h]h }(hjwhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj5ubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj5ubj)}(hwqh]hwq}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj5ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1j hjhhhjhMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhMubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhMhjhhubj)}(hhh]h)}(h%test whether a workqueue is congestedh]h%test whether a workqueue is congested}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``int cpu`` CPU in question ``struct workqueue_struct *wq`` target workqueue **Description** Test whether **wq**'s cpu workqueue for **cpu** is congested. There is no synchronization around this function and the test result is unreliable and only useful as advisory hints or for debugging. If **cpu** is WORK_CPU_UNBOUND, the test is performed on the local CPU. With the exception of ordered workqueues, all workqueues have per-cpu pool_workqueues, each with its own congested state. A workqueue being congested on one CPU doesn't mean that the workqueue is contested on any other CPUs. 
**Return** ``true`` if congested, ``false`` otherwise.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjS)}(hhh](jX)}(h``int cpu`` CPU in question h](j^)}(h ``int cpu``h]j)}(hjh]hint cpu}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(hCPU in questionh]hCPU in question}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMhjubjX)}(h1``struct workqueue_struct *wq`` target workqueue h](j^)}(h``struct workqueue_struct *wq``h]j)}(hj6h]hstruct workqueue_struct *wq}(hj8hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj4ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj0ubjw)}(hhh]h)}(htarget workqueueh]htarget workqueue}(hjOhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjKhMhjLubah}(h]h ]h"]h$]h&]uh1jvhj0ubeh}(h]h ]h"]h$]h&]uh1jWhjKhMhjubeh}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjqh]h Description}(hjshhhNhNubah}(h]h ]h"]h$]h&]uh1jhjoubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(hTest whether **wq**'s cpu workqueue for **cpu** is congested. There is no synchronization around this function and the test result is unreliable and only useful as advisory hints or for debugging.h](h Test whether }(hjhhhNhNubj)}(h**wq**h]hwq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh’s cpu workqueue for }(hjhhhNhNubj)}(h**cpu**h]hcpu}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh is congested. 
There is no synchronization around this function and the test result is unreliable and only useful as advisory hints or for debugging.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(hGIf **cpu** is WORK_CPU_UNBOUND, the test is performed on the local CPU.h](hIf }(hjhhhNhNubj)}(h**cpu**h]hcpu}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh= is WORK_CPU_UNBOUND, the test is performed on the local CPU.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(hWith the exception of ordered workqueues, all workqueues have per-cpu pool_workqueues, each with its own congested state. A workqueue being congested on one CPU doesn't mean that the workqueue is contested on any other CPUs.h]hWith the exception of ordered workqueues, all workqueues have per-cpu pool_workqueues, each with its own congested state. A workqueue being congested on one CPU doesn’t mean that the workqueue is contested on any other CPUs.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(h+``true`` if congested, ``false`` otherwise.h](j)}(h``true``h]htrue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if congested, }(hjhhhNhNubj)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh otherwise.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhwork_busy (C function) c.work_busyhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h1unsigned int work_busy (struct work_struct *work)h]jx)}(h0unsigned int work_busy(struct work_struct 
*work)h](j4)}(hunsignedh]hunsigned}(hjQhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjMhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hj`hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjMhhhj_hMubj4)}(hinth]hint}(hjnhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjMhhhj_hMubj)}(h h]h }(hj|hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjMhhhj_hMubj)}(h work_busyh]j)}(h work_busyh]h work_busy}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjMhhhj_hMubj )}(h(struct work_struct *work)h]j)}(hstruct work_struct *workh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(h work_structh]h work_struct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jjsb c.work_busyasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hworkh]hwork}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j hjMhhhj_hMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjIhhhj_hMubah}(h]jDah ](jjeh"]h$]h&]jj)jhuh1jqhj_hMhjFhhubj)}(hhh]h)}(h3test whether a work is currently pending or runningh]h3test whether a work is currently pending or running}(hj-hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj*hhubah}(h]h ]h"]h$]h&]uh1jhjFhhhj_hMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjEjjEjjjuh1jlhhhjJhNhNubj)}(hXD**Parameters** ``struct work_struct *work`` the work to be tested **Description** Test whether **work** is currently pending or running. There is no synchronization around this function and the test result is unreliable and only useful as advisory hints or for debugging. 
**Return** OR'd bitmask of WORK_BUSY_* bits.h](h)}(h**Parameters**h]j)}(hjOh]h Parameters}(hjQhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjMubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjIubjS)}(hhh]jX)}(h3``struct work_struct *work`` the work to be tested h](j^)}(h``struct work_struct *work``h]j)}(hjnh]hstruct work_struct *work}(hjphhhNhNubah}(h]h ]h"]h$]h&]uh1jhjlubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjhubjw)}(hhh]h)}(hthe work to be testedh]hthe work to be tested}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjhubeh}(h]h ]h"]h$]h&]uh1jWhjhMhjeubah}(h]h ]h"]h$]h&]uh1jRhjIubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjIubh)}(hTest whether **work** is currently pending or running. There is no synchronization around this function and the test result is unreliable and only useful as advisory hints or for debugging.h](h Test whether }(hjhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh is currently pending or running. 
There is no synchronization around this function and the test result is unreliable and only useful as advisory hints or for debugging.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjIubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjIubh)}(h!OR'd bitmask of WORK_BUSY_* bits.h]h#OR’d bitmask of WORK_BUSY_* bits.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjIubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhset_worker_desc (C function)c.set_worker_deschNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h+void set_worker_desc (const char *fmt, ...)h]jx)}(h*void set_worker_desc(const char *fmt, ...)h](j4)}(hvoidh]hvoid}(hj'hhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj#hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hj6hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj#hhhj5hMubj)}(hset_worker_desch]j)}(hset_worker_desch]hset_worker_desc}(hjHhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjDubah}(h]h ](jjeh"]h$]h&]jjuh1jhj#hhhj5hMubj )}(h(const char *fmt, ...)h](j)}(hconst char *fmth](j~)}(hjh]hconst}(hjdhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj`ubj)}(h h]h }(hjqhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj`ubj4)}(hcharh]hchar}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj`ubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj`ubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj`ubj)}(hfmth]hfmt}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj`ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj\ubj)}(h...h]j)}(hjh]h...}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]noemphjjuh1jhj\ubeh}(h]h ]h"]h$]h&]jjuh1j hj#hhhj5hMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhj5hMubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhj5hMhjhhubj)}(hhh]h)}(h)set description for the current work itemh]h)set description for the current 
work item}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jhjhhhj5hMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``const char *fmt`` printf-style format string ``...`` arguments for the format string **Description** This function can be called by a running work function to describe what the work item is about. If the worker task gets dumped, this information will be printed out together to help debugging. The description can be at most WORKER_DESC_LEN including the trailing '\0'.h](h)}(h**Parameters**h]j)}(hj h]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjS)}(hhh](jX)}(h/``const char *fmt`` printf-style format string h](j^)}(h``const char *fmt``h]j)}(hj+h]hconst char *fmt}(hj-hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj)ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj%ubjw)}(hhh]h)}(hprintf-style format stringh]hprintf-style format string}(hjDhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj@hMhjAubah}(h]h ]h"]h$]h&]uh1jvhj%ubeh}(h]h ]h"]h$]h&]uh1jWhj@hMhj"ubjX)}(h(``...`` arguments for the format string h](j^)}(h``...``h]j)}(hjdh]h...}(hjfhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjbubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj^ubjw)}(hhh]h)}(harguments for the format stringh]harguments for the format string}(hj}hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjyhMhjzubah}(h]h ]h"]h$]h&]uh1jvhj^ubeh}(h]h ]h"]h$]h&]uh1jWhjyhMhj"ubeh}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(hXThis function can be called by a running work function to describe what the 
work item is about. If the worker task gets dumped, this information will be printed out together to help debugging. The description can be at most WORKER_DESC_LEN including the trailing '\0'.h]hXThis function can be called by a running work function to describe what the work item is about. If the worker task gets dumped, this information will be printed out together to help debugging. The description can be at most WORKER_DESC_LEN including the trailing ‘0’.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhprint_worker_info (C function)c.print_worker_infohNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hFvoid print_worker_info (const char *log_lvl, struct task_struct *task)h]jx)}(hEvoid print_worker_info(const char *log_lvl, struct task_struct *task)h](j4)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhMubj)}(hprint_worker_infoh]j)}(hprint_worker_infoh]hprint_worker_info}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhMubj )}(h/(const char *log_lvl, struct task_struct *task)h](j)}(hconst char *log_lvlh](j~)}(hjh]hconst}(hj!hhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hj.hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj4)}(hcharh]hchar}(hj<hhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjubj)}(h h]h }(hjJhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjXhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hlog_lvlh]hlog_lvl}(hjehhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct task_struct *taskh](j~)}(hjh]hstruct}(hj~hhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjzubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjzubh)}(hhh]j)}(h task_structh]h task_struct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN 
classnameNjj)}j]j)}jjsbc.print_worker_infoasbuh1hhjzubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjzubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjzubj)}(htaskh]htask}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjzubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1j hjhhhjhMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhMubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhMhjhhubj)}(hhh]h)}(h,print out worker information and descriptionh]h,print out worker information and description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``const char *log_lvl`` the log level to use when printing ``struct task_struct *task`` target task **Description** If **task** is a worker and currently executing a work item, print out the name of the workqueue being serviced and worker description set with set_worker_desc() by the currently executing work item. This function can be safely called on any task as long as the task_struct itself is accessible. 
While safe, this function isn't synchronized and may print out mixups or garbages of limited length.h](h)}(h**Parameters**h]j)}(hj#h]h Parameters}(hj%hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj!ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjS)}(hhh](jX)}(h;``const char *log_lvl`` the log level to use when printing h](j^)}(h``const char *log_lvl``h]j)}(hjBh]hconst char *log_lvl}(hjDhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj@ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj<ubjw)}(hhh]h)}(h"the log level to use when printingh]h"the log level to use when printing}(hj[hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjWhMhjXubah}(h]h ]h"]h$]h&]uh1jvhj<ubeh}(h]h ]h"]h$]h&]uh1jWhjWhMhj9ubjX)}(h)``struct task_struct *task`` target task h](j^)}(h``struct task_struct *task``h]j)}(hj{h]hstruct task_struct *task}(hj}hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjyubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjuubjw)}(hhh]h)}(h target taskh]h target task}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjuubeh}(h]h ]h"]h$]h&]uh1jWhjhMhj9ubeh}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(hIf **task** is a worker and currently executing a work item, print out the name of the workqueue being serviced and worker description set with set_worker_desc() by the currently executing work item.h](hIf }(hjhhhNhNubj)}(h**task**h]htask}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh is a worker and currently executing a work item, print out the name of the workqueue being serviced and worker description set with set_worker_desc() by the currently executing work item.}(hjhhhNhNubeh}(h]h 
]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(hThis function can be safely called on any task as long as the task_struct itself is accessible. While safe, this function isn't synchronized and may print out mixups or garbages of limited length.h]hThis function can be safely called on any task as long as the task_struct itself is accessible. While safe, this function isn’t synchronized and may print out mixups or garbages of limited length.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhshow_one_workqueue (C function)c.show_one_workqueuehNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h5void show_one_workqueue (struct workqueue_struct *wq)h]jx)}(h4void show_one_workqueue(struct workqueue_struct *wq)h](j4)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hj+hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhj*hMubj)}(hshow_one_workqueueh]j)}(hshow_one_workqueueh]hshow_one_workqueue}(hj=hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj9ubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhj*hMubj )}(h(struct workqueue_struct *wq)h]j)}(hstruct workqueue_struct *wqh](j~)}(hjh]hstruct}(hjYhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjUubj)}(h h]h }(hjfhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjUubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjwhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjtubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjymodnameN classnameNjj)}j]j)}jj?sbc.show_one_workqueueasbuh1hhjUubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjUubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjUubj)}(hwqh]hwq}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjUubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjQubah}(h]h ]h"]h$]h&]jjuh1j hjhhhj*hMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhj*hMubah}(h]jah 
](jjeh"]h$]h&]jj)jhuh1jqhj*hMhjhhubj)}(hhh]h)}(h!dump state of specified workqueueh]h!dump state of specified workqueue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jhjhhhj*hMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hW**Parameters** ``struct workqueue_struct *wq`` workqueue whose state will be printedh](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjS)}(hhh]jX)}(hE``struct workqueue_struct *wq`` workqueue whose state will be printedh](j^)}(h``struct workqueue_struct *wq``h]j)}(hjh]hstruct workqueue_struct *wq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(h%workqueue whose state will be printedh]h%workqueue whose state will be printed}(hj6hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj3ubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhj2hMhjubah}(h]h ]h"]h$]h&]uh1jRhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh!show_one_worker_pool (C function)c.show_one_worker_poolhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h4void show_one_worker_pool (struct worker_pool *pool)h]jx)}(h3void show_one_worker_pool(struct worker_pool *pool)h](j4)}(hvoidh]hvoid}(hjwhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjshhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjshhhjhMubj)}(hshow_one_worker_poolh]j)}(hshow_one_worker_poolh]hshow_one_worker_pool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjshhhjhMubj )}(h(struct worker_pool *pool)h]j)}(hstruct worker_pool 
*poolh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(h worker_poolh]h worker_pool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jjsbc.show_one_worker_poolasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hpoolh]hpool}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j hjshhhjhMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjohhhjhMubah}(h]jjah ](jjeh"]h$]h&]jj)jhuh1jqhjhMhjlhhubj)}(hhh]h)}(h#dump state of specified worker poolh]h#dump state of specified worker pool}(hj7hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj4hhubah}(h]h ]h"]h$]h&]uh1jhjlhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjOjjOjjjuh1jlhhhjJhNhNubj)}(hV**Parameters** ``struct worker_pool *pool`` worker pool whose state will be printedh](h)}(h**Parameters**h]j)}(hjYh]h Parameters}(hj[hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjWubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjSubjS)}(hhh]jX)}(hD``struct worker_pool *pool`` worker pool whose state will be printedh](j^)}(h``struct worker_pool *pool``h]j)}(hjxh]hstruct worker_pool *pool}(hjzhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjvubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjrubjw)}(hhh]h)}(h'worker pool whose state will be printedh]h'worker pool whose state will be printed}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubah}(h]h ]h"]h$]h&]uh1jvhjrubeh}(h]h ]h"]h$]h&]uh1jWhjhMhjoubah}(h]h ]h"]h$]h&]uh1jRhjSubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh show_all_workqueues (C 
function)c.show_all_workqueueshNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hvoid show_all_workqueues (void)h]jx)}(hvoid show_all_workqueues(void)h](j4)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM ubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhM ubj)}(hshow_all_workqueuesh]j)}(hshow_all_workqueuesh]hshow_all_workqueues}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhM ubj )}(h(void)h]j)}(hvoidh]j4)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj ubah}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j hjhhhjhM ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhM ubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhM hjhhubj)}(hhh]h)}(hdump workqueue stateh]hdump workqueue state}(hj9hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hj6hhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhM ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjQjjQjjjuh1jlhhhjJhNhNubj)}(h**Parameters** ``void`` no arguments **Description** Called from a sysrq handler and prints out all busy workqueues and pools.h](h)}(h**Parameters**h]j)}(hj[h]h Parameters}(hj]hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjYubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjUubjS)}(hhh]jX)}(h``void`` no arguments h](j^)}(h``void``h]j)}(hjzh]hvoid}(hj|hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjxubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chKhjtubjw)}(hhh]h)}(h no argumentsh]h no arguments}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhKhjubah}(h]h ]h"]h$]h&]uh1jvhjtubeh}(h]h ]h"]h$]h&]uh1jWhjhKhjqubah}(h]h ]h"]h$]h&]uh1jRhjUubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chKhjUubh)}(hICalled from a sysrq handler and prints out 
all busy workqueues and pools.h]hICalled from a sysrq handler and prints out all busy workqueues and pools.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hjUubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh&show_freezable_workqueues (C function)c.show_freezable_workqueueshNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h%void show_freezable_workqueues (void)h]jx)}(h$void show_freezable_workqueues(void)h](j4)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM!ubj)}(h h]h }(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhM!ubj)}(hshow_freezable_workqueuesh]j)}(hshow_freezable_workqueuesh]hshow_freezable_workqueues}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhM!ubj )}(h(void)h]j)}(hvoidh]j4)}(hvoidh]hvoid}(hj7hhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj3ubah}(h]h ]h"]h$]h&]noemphjjuh1jhj/ubah}(h]h ]h"]h$]h&]jjuh1j hjhhhjhM!ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhM!ubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhM!hjhhubj)}(hhh]h)}(hdump freezable workqueue stateh]hdump freezable workqueue state}(hjahhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM!hj^hhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhM!ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjyjjyjjjuh1jlhhhjJhNhNubj)}(h**Parameters** ``void`` no arguments **Description** Called from try_to_freeze_tasks() and prints out all freezable workqueues still busy.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM%hj}ubjS)}(hhh]jX)}(h``void`` no arguments h](j^)}(h``void``h]j)}(hjh]hvoid}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: 
./kernel/workqueue.chKhjubjw)}(hhh]h)}(h no argumentsh]h no arguments}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhKhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhKhjubah}(h]h ]h"]h$]h&]uh1jRhj}ubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chKhj}ubh)}(hUCalled from try_to_freeze_tasks() and prints out all freezable workqueues still busy.h]hUCalled from try_to_freeze_tasks() and prints out all freezable workqueues still busy.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM"hj}ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhrebind_workers (C function)c.rebind_workershNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h.void rebind_workers (struct worker_pool *pool)h]jx)}(h-void rebind_workers(struct worker_pool *pool)h](j4)}(hvoidh]hvoid}(hj"hhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hj1hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhj0hMubj)}(hrebind_workersh]j)}(hrebind_workersh]hrebind_workers}(hjChhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj?ubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhj0hMubj )}(h(struct worker_pool *pool)h]j)}(hstruct worker_pool *poolh](j~)}(hjh]hstruct}(hj_hhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hj[ubj)}(h h]h }(hjlhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj[ubh)}(hhh]j)}(h worker_poolh]h worker_pool}(hj}hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjzubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jjEsbc.rebind_workersasbuh1hhj[ubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj[ubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj[ubj)}(hpoolh]hpool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj[ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjWubah}(h]h ]h"]h$]h&]jjuh1j hjhhhj0hMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhj0hMubah}(h]jah 
](jjeh"]h$]h&]jj)jhuh1jqhj0hMhjhhubj)}(hhh]h)}(h2rebind all workers of a pool to the associated CPUh]h2rebind all workers of a pool to the associated CPU}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jhjhhhj0hMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(h**Parameters** ``struct worker_pool *pool`` pool of interest **Description** **pool->cpu** is coming online. Rebind all workers to the CPU.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjS)}(hhh]jX)}(h.``struct worker_pool *pool`` pool of interest h](j^)}(h``struct worker_pool *pool``h]j)}(hj#h]hstruct worker_pool *pool}(hj%hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj!ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(hpool of interesth]hpool of interest}(hj<hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj8hMhj9ubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhj8hMhjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hj^h]h Description}(hj`hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj\ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(h?**pool->cpu** is coming online. Rebind all workers to the CPU.h](j)}(h **pool->cpu**h]h pool->cpu}(hjxhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjtubh2 is coming online. 
Rebind all workers to the CPU.}(hjthhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh,restore_unbound_workers_cpumask (C function)!c.restore_unbound_workers_cpumaskhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hHvoid restore_unbound_workers_cpumask (struct worker_pool *pool, int cpu)h]jx)}(hGvoid restore_unbound_workers_cpumask(struct worker_pool *pool, int cpu)h](j4)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhMubj)}(hrestore_unbound_workers_cpumaskh]j)}(hrestore_unbound_workers_cpumaskh]hrestore_unbound_workers_cpumask}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhMubj )}(h#(struct worker_pool *pool, int cpu)h](j)}(hstruct worker_pool *poolh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(h worker_poolh]h worker_pool}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jjsb!c.restore_unbound_workers_cpumaskasbuh1hhjubj)}(h h]h }(hj,hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hj:hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hpoolh]hpool}(hjGhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hint cpuh](j4)}(hinth]hint}(hj`hhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj\ubj)}(h h]h }(hjnhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj\ubj)}(hcpuh]hcpu}(hj|hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj\ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1j hjhhhjhMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhMubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhMhjhhubj)}(hhh]h)}(h"restore cpumask of unbound workersh]h"restore cpumask of unbound workers}(hjhhhNhNubah}(h]h 
]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``struct worker_pool *pool`` unbound pool of interest ``int cpu`` the CPU which is coming up **Description** An unbound pool may end up with a cpumask which doesn't have any online CPUs. When a worker of such pool get scheduled, the scheduler resets its cpus_allowed. If **cpu** is in **pool**'s cpumask which didn't have any online CPU before, cpus_allowed of all its workers should be restored.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjS)}(hhh](jX)}(h6``struct worker_pool *pool`` unbound pool of interest h](j^)}(h``struct worker_pool *pool``h]j)}(hjh]hstruct worker_pool *pool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(hunbound pool of interesth]hunbound pool of interest}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMhjubjX)}(h'``int cpu`` the CPU which is coming up h](j^)}(h ``int cpu``h]j)}(hj h]hint cpu}(hj"hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjw)}(hhh]h)}(hthe CPU which is coming uph]hthe CPU which is coming up}(hj9hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj5hMhj6ubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhj5hMhjubeh}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hj[h]h Description}(hj]hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjYubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(hX!An unbound pool may end up with a cpumask which doesn't have any online 
CPUs. When a worker of such pool get scheduled, the scheduler resets its cpus_allowed. If **cpu** is in **pool**'s cpumask which didn't have any online CPU before, cpus_allowed of all its workers should be restored.h](hAn unbound pool may end up with a cpumask which doesn’t have any online CPUs. When a worker of such pool get scheduled, the scheduler resets its cpus_allowed. If }(hjqhhhNhNubj)}(h**cpu**h]hcpu}(hjyhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjqubh is in }(hjqhhhNhNubj)}(h**pool**h]hpool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjqubhk’s cpumask which didn’t have any online CPU before, cpus_allowed of all its workers should be restored.}(hjqhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhwork_on_cpu_key (C function)c.work_on_cpu_keyhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hYlong work_on_cpu_key (int cpu, long (*fn)(void *), void *arg, struct lock_class_key *key)h]jx)}(hWlong work_on_cpu_key(int cpu, long (*fn)(void*), void *arg, struct lock_class_key *key)h](j4)}(hlongh]hlong}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM\ubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhM\ubj)}(hwork_on_cpu_keyh]j)}(hwork_on_cpu_keyh]hwork_on_cpu_key}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhM\ubj )}(hC(int cpu, long (*fn)(void*), void *arg, struct lock_class_key *key)h](j)}(hint cpuh](j4)}(hinth]hint}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hcpuh]hcpu}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hlong (*fn)(void*)h](j4)}(hlongh]hlong}(hj6hhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj2ubj)}(h h]h }(hjDhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj2ubj)}(h(h]h(}(hjRhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj2ubj)}(hjah]h*}(hj`hhhNhNubah}(h]h 
]jah"]h$]h&]uh1jhj2ubj)}(hfnh]hfn}(hjmhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj2ubj)}(h)h]h)}(hj{hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj2ubj)}(hjTh]h(}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj2ubj4)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj2ubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj2ubj)}(hj}h]h)}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj2ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(h void *argh](j4)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hargh]harg}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct lock_class_key *keyh](j~)}(hjh]hstruct}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(hlock_class_keyh]hlock_class_key}(hj)hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj&ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj+modnameN classnameNjj)}j]j)}jjsbc.work_on_cpu_keyasbuh1hhjubj)}(h h]h }(hjIhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjWhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hkeyh]hkey}(hjdhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1j hjhhhjhM\ubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhM\ubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhM\hjhhubj)}(hhh]h)}(h4run a function in thread context on a particular cpuh]h4run a function in thread context on a particular cpu}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM\hjhhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhM\ubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``int cpu`` the cpu to run on ``long (*fn)(void *)`` the function to run ``void *arg`` the function arg ``struct lock_class_key *key`` The lock class key for lock debugging purposes **Description** It is up to the caller to ensure that the cpu doesn't go offline. The caller must not hold any locks which would prevent **fn** from completing. 
**Return** The value **fn** returns.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM`hjubjS)}(hhh](jX)}(h``int cpu`` the cpu to run on h](j^)}(h ``int cpu``h]j)}(hjh]hint cpu}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM]hjubjw)}(hhh]h)}(hthe cpu to run onh]hthe cpu to run on}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM]hjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhM]hjubjX)}(h+``long (*fn)(void *)`` the function to run h](j^)}(h``long (*fn)(void *)``h]j)}(hjh]hlong (*fn)(void *)}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM^hjubjw)}(hhh]h)}(hthe function to runh]hthe function to run}(hj!hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM^hjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhM^hjubjX)}(h``void *arg`` the function arg h](j^)}(h ``void *arg``h]j)}(hjAh]h void *arg}(hjChhhNhNubah}(h]h ]h"]h$]h&]uh1jhj?ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM_hj;ubjw)}(hhh]h)}(hthe function argh]hthe function arg}(hjZhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjVhM_hjWubah}(h]h ]h"]h$]h&]uh1jvhj;ubeh}(h]h ]h"]h$]h&]uh1jWhjVhM_hjubjX)}(hN``struct lock_class_key *key`` The lock class key for lock debugging purposes h](j^)}(h``struct lock_class_key *key``h]j)}(hjzh]hstruct lock_class_key *key}(hj|hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjxubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM`hjtubjw)}(hhh]h)}(h.The lock class key for lock debugging purposesh]h.The lock class key for lock debugging purposes}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM`hjubah}(h]h ]h"]h$]h&]uh1jvhjtubeh}(h]h ]h"]h$]h&]uh1jWhjhM`hjubeh}(h]h 
]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMbhjubh)}(hIt is up to the caller to ensure that the cpu doesn't go offline. The caller must not hold any locks which would prevent **fn** from completing.h](h{It is up to the caller to ensure that the cpu doesn’t go offline. The caller must not hold any locks which would prevent }(hjhhhNhNubj)}(h**fn**h]hfn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh from completing.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMahjubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMdhjubh)}(hThe value **fn** returns.h](h The value }(hjhhhNhNubj)}(h**fn**h]hfn}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh returns.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMehjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh$freeze_workqueues_begin (C function)c.freeze_workqueues_beginhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h#void freeze_workqueues_begin (void)h]jx)}(h"void freeze_workqueues_begin(void)h](j4)}(hvoidh]hvoid}(hjEhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjAhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMxubj)}(h h]h }(hjThhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjAhhhjShMxubj)}(hfreeze_workqueues_beginh]j)}(hfreeze_workqueues_beginh]hfreeze_workqueues_begin}(hjfhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjbubah}(h]h ](jjeh"]h$]h&]jjuh1jhjAhhhjShMxubj )}(h(void)h]j)}(hvoidh]j4)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj~ubah}(h]h ]h"]h$]h&]noemphjjuh1jhjzubah}(h]h ]h"]h$]h&]jjuh1j hjAhhhjShMxubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhj=hhhjShMxubah}(h]j8ah 
](jjeh"]h$]h&]jj)jhuh1jqhjShMxhj:hhubj)}(hhh]h)}(hbegin freezing workqueuesh]hbegin freezing workqueues}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMxhjhhubah}(h]h ]h"]h$]h&]uh1jhj:hhhjShMxubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hX$**Parameters** ``void`` no arguments **Description** Start freezing workqueues. After this function returns, all freezable workqueues will queue new works to their inactive_works list instead of pool->worklist. **Context** Grabs and releases wq_pool_mutex, wq->mutex and pool->lock's.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM|hjubjS)}(hhh]jX)}(h``void`` no arguments h](j^)}(h``void``h]j)}(hjh]hvoid}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chKhjubjw)}(hhh]h)}(h no argumentsh]h no arguments}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhKhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhKhjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hj(h]h Description}(hj*hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj&ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chKhjubh)}(hStart freezing workqueues. After this function returns, all freezable workqueues will queue new works to their inactive_works list instead of pool->worklist.h]hStart freezing workqueues. 
After this function returns, all freezable workqueues will queue new works to their inactive_works list instead of pool->worklist.}(hj>hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMyhjubh)}(h **Context**h]j)}(hjOh]hContext}(hjQhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjMubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM}hjubh)}(h=Grabs and releases wq_pool_mutex, wq->mutex and pool->lock's.h]h?Grabs and releases wq_pool_mutex, wq->mutex and pool->lock’s.}(hjehhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM~hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh#freeze_workqueues_busy (C function)c.freeze_workqueues_busyhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h"bool freeze_workqueues_busy (void)h]jx)}(h!bool freeze_workqueues_busy(void)h](j4)}(hj&h]hbool}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhMubj)}(hfreeze_workqueues_busyh]j)}(hfreeze_workqueues_busyh]hfreeze_workqueues_busy}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhMubj )}(h(void)h]j)}(hvoidh]j4)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjubah}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j hjhhhjhMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhMubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhMhjhhubj)}(hhh]h)}(h$are freezable workqueues still busy?h]h$are freezable workqueues still busy?}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hXK**Parameters** ``void`` no arguments **Description** Check whether freezing is complete. 
This function must be called between freeze_workqueues_begin() and thaw_workqueues(). **Context** Grabs and releases wq_pool_mutex. **Return** ``true`` if some freezable workqueues are still busy. ``false`` if freezing is complete.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjS)}(hhh]jX)}(h``void`` no arguments h](j^)}(h``void``h]j)}(hj;h]hvoid}(h=j=hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj9ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chKhj5ubjw)}(hhh]h)}(h no argumentsh]h no arguments}(hjThhhNhNubah}(h]h ]h"]h$]h&]uh1hhjPhKhjQubah}(h]h ]h"]h$]h&]uh1jvhj5ubeh}(h]h ]h"]h$]h&]uh1jWhjPhKhj2ubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjvh]h Description}(hjxhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjtubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chKhjubh)}(hzCheck whether freezing is complete. This function must be called between freeze_workqueues_begin() and thaw_workqueues().h]hzCheck whether freezing is complete. 
This function must be called between freeze_workqueues_begin() and thaw_workqueues().}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(h **Context**h]j)}(hjh]hContext}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(h!Grabs and releases wq_pool_mutex.h]h!Grabs and releases wq_pool_mutex.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(hY``true`` if some freezable workqueues are still busy. ``false`` if freezing is complete.h](j)}(h``true``h]htrue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh/ if some freezable workqueues are still busy. }(hjhhhNhNubj)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if freezing is complete.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jhthaw_workqueues (C function)c.thaw_workqueueshNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hvoid thaw_workqueues (void)h]jx)}(hvoid thaw_workqueues(void)h](j4)}(hvoidh]hvoid}(hj)hhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj%hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hj8hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj%hhhj7hMubj)}(hthaw_workqueuesh]j)}(hthaw_workqueuesh]hthaw_workqueues}(hjJhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjFubah}(h]h ](jjeh"]h$]h&]jjuh1jhj%hhhj7hMubj )}(h(void)h]j)}(hvoidh]j4)}(hvoidh]hvoid}(hjfhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjbubah}(h]h ]h"]h$]h&]noemphjjuh1jhj^ubah}(h]h ]h"]h$]h&]jjuh1j hj%hhhj7hMubeh}(h]h 
]h"]h$]h&]jjjuh1jwjjhj!hhhj7hMubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhj7hMhjhhubj)}(hhh]h)}(hthaw workqueuesh]hthaw workqueues}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jhjhhhj7hMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``void`` no arguments **Description** Thaw workqueues. Normal queueing is restored and all collected frozen works are transferred to their respective pool worklists. **Context** Grabs and releases wq_pool_mutex, wq->mutex and pool->lock's.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubjS)}(hhh]jX)}(h``void`` no arguments h](j^)}(h``void``h]j)}(hjh]hvoid}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chKhjubjw)}(hhh]h)}(h no argumentsh]h no arguments}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhKhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhKhjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hj h]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chKhjubh)}(hThaw workqueues. Normal queueing is restored and all collected frozen works are transferred to their respective pool worklists.h]hThaw workqueues. 
Normal queueing is restored and all collected frozen works are transferred to their respective pool worklists.}(hj"hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(h **Context**h]j)}(hj3h]hContext}(hj5hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj1ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubh)}(h=Grabs and releases wq_pool_mutex, wq->mutex and pool->lock's.h]h?Grabs and releases wq_pool_mutex, wq->mutex and pool->lock’s.}(hjIhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh.workqueue_unbound_exclude_cpumask (C function)#c.workqueue_unbound_exclude_cpumaskhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(hEint workqueue_unbound_exclude_cpumask (cpumask_var_t exclude_cpumask)h]jx)}(hDint workqueue_unbound_exclude_cpumask(cpumask_var_t exclude_cpumask)h](j4)}(hinth]hint}(hjxhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjthhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjthhhjhMubj)}(h!workqueue_unbound_exclude_cpumaskh]j)}(h!workqueue_unbound_exclude_cpumaskh]h!workqueue_unbound_exclude_cpumask}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjthhhjhMubj )}(h(cpumask_var_t exclude_cpumask)h]j)}(hcpumask_var_t exclude_cpumaskh](h)}(hhh]j)}(h cpumask_var_th]h cpumask_var_t}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jjsb#c.workqueue_unbound_exclude_cpumaskasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hexclude_cpumaskh]hexclude_cpumask}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j hjthhhjhMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjphhhjhMubah}(h]jkah 
](jjeh"]h$]h&]jj)jhuh1jqhjhMhjmhhubj)}(hhh]h)}(h'Exclude given CPUs from unbound cpumaskh]h'Exclude given CPUs from unbound cpumask}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj hhubah}(h]h ]h"]h$]h&]uh1jhjmhhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjj(jj(jjjuh1jlhhhjJhNhNubj)}(h**Parameters** ``cpumask_var_t exclude_cpumask`` the cpumask to be excluded from wq_unbound_cpumask **Description** This function can be called from cpuset code to provide a set of isolated CPUs that should be excluded from wq_unbound_cpumask.h](h)}(h**Parameters**h]j)}(hj2h]h Parameters}(hj4hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj0ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hj,ubjS)}(hhh]jX)}(hU``cpumask_var_t exclude_cpumask`` the cpumask to be excluded from wq_unbound_cpumask h](j^)}(h!``cpumask_var_t exclude_cpumask``h]j)}(hjQh]hcpumask_var_t exclude_cpumask}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1jhjOubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjKubjw)}(hhh]h)}(h2the cpumask to be excluded from wq_unbound_cpumaskh]h2the cpumask to be excluded from wq_unbound_cpumask}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjfhMhjgubah}(h]h ]h"]h$]h&]uh1jvhjKubeh}(h]h ]h"]h$]h&]uh1jWhjfhMhjHubah}(h]h ]h"]h$]h&]uh1jRhj,ubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chM hj,ubh)}(hThis function can be called from cpuset code to provide a set of isolated CPUs that should be excluded from wq_unbound_cpumask.h]hThis function can be called from cpuset code to provide a set of isolated CPUs that should be excluded from wq_unbound_cpumask.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj,ubeh}(h]h ] 
kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh*workqueue_set_unbound_cpumask (C function)c.workqueue_set_unbound_cpumaskhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h9int workqueue_set_unbound_cpumask (cpumask_var_t cpumask)h]jx)}(h8int workqueue_set_unbound_cpumask(cpumask_var_t cpumask)h](j4)}(hinth]hint}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMPubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhMPubj)}(hworkqueue_set_unbound_cpumaskh]j)}(hworkqueue_set_unbound_cpumaskh]hworkqueue_set_unbound_cpumask}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhjhhhjhMPubj )}(h(cpumask_var_t cpumask)h]j)}(hcpumask_var_t cpumaskh](h)}(hhh]j)}(h cpumask_var_th]h cpumask_var_t}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN classnameNjj)}j]j)}jjsbc.workqueue_set_unbound_cpumaskasbuh1hhj ubj)}(h h]h }(hj1hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj ubj)}(hcpumaskh]hcpumask}(hj?hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j hjhhhjhMPubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjhhhjhMPubah}(h]jah ](jjeh"]h$]h&]jj)jhuh1jqhjhMPhjhhubj)}(hhh]h)}(h!Set the low-level unbound cpumaskh]h!Set the low-level unbound cpumask}(hjihhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMPhjfhhubah}(h]h ]h"]h$]h&]uh1jhjhhhjhMPubeh}(h]h ](jfunctioneh"]h$]h&]jjjjjjjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``cpumask_var_t cpumask`` the cpumask to set **Description** The low-level workqueues cpumask is a global cpumask that limits the affinity of all unbound workqueues. This function check the **cpumask** and apply it to all unbound workqueues and updates all pwqs of them. 
**Return** 0 - Success -EINVAL - Invalid **cpumask** -ENOMEM - Failed to allocate memory for attrs or pwqs.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMThjubjS)}(hhh]jX)}(h-``cpumask_var_t cpumask`` the cpumask to set h](j^)}(h``cpumask_var_t cpumask``h]j)}(hjh]hcpumask_var_t cpumask}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMQhjubjw)}(hhh]h)}(hthe cpumask to seth]hthe cpumask to set}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMQhjubah}(h]h ]h"]h$]h&]uh1jvhjubeh}(h]h ]h"]h$]h&]uh1jWhjhMQhjubah}(h]h ]h"]h$]h&]uh1jRhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMShjubj)}(hThe low-level workqueues cpumask is a global cpumask that limits the affinity of all unbound workqueues. This function check the **cpumask** and apply it to all unbound workqueues and updates all pwqs of them. h]h)}(hThe low-level workqueues cpumask is a global cpumask that limits the affinity of all unbound workqueues. This function check the **cpumask** and apply it to all unbound workqueues and updates all pwqs of them.h](hThe low-level workqueues cpumask is a global cpumask that limits the affinity of all unbound workqueues. 
This function checks the }(hjhhhNhNubj)}(h **cpumask**h]hcpumask}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhE and applies it to all unbound workqueues and updates all pwqs of them.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMRhjubah}(h]h ]h"]h$]h&]uh1jhjhMRhjubh)}(h **Return**h]j)}(hj(h]hReturn}(hj*hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj&ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMVhjubh)}(hf0 - Success -EINVAL - Invalid **cpumask** -ENOMEM - Failed to allocate memory for attrs or pwqs.h](h$0 - Success -EINVAL - Invalid }(hj>hhhNhNubj)}(h **cpumask**h]hcpumask}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj>ubh7 -ENOMEM - Failed to allocate memory for attrs or pwqs.}(hj>hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMWhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh%workqueue_sysfs_register (C function)c.workqueue_sysfs_registerhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h:int workqueue_sysfs_register (struct workqueue_struct *wq)h]jx)}(h9int workqueue_sysfs_register(struct workqueue_struct *wq)h](j4)}(hinth]hint}(hjhhhNhNubah}(h]h ]j@ah"]h$]h&]uh1j3hj{hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj{hhhjhMubj)}(hworkqueue_sysfs_registerh]j)}(hworkqueue_sysfs_registerh]hworkqueue_sysfs_register}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ](jjeh"]h$]h&]jjuh1jhj{hhhjhMubj )}(h(struct workqueue_struct *wq)h]j)}(hstruct workqueue_struct *wqh](j~)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1j}hjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjmodnameN
classnameNjj)}j]j)}jjsbc.workqueue_sysfs_registerasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hjah]h*}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(hwqh]hwq}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j hj{hhhjhMubeh}(h]h ]h"]h$]h&]jjjuh1jwjjhjwhhhjhMubah}(h]jrah ](jjeh"]h$]h&]jj)jhuh1jqhjhMhjthhubj)}(hhh]h)}(h!make a workqueue visible in sysfsh]h!make a workqueue visible in sysfs}(hj?hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj<hhubah}(h]h ]h"]h$]h&]uh1jhjthhhjhMubeh}(h]h ](jfunctioneh"]h$]h&]jjjjWjjWjjjuh1jlhhhjJhNhNubj)}(hX**Parameters** ``struct workqueue_struct *wq`` the workqueue to register **Description** Expose **wq** in sysfs under /sys/bus/workqueue/devices. alloc_workqueue*() automatically calls this function if WQ_SYSFS is set which is the preferred method. Workqueue user should use this function directly iff it wants to apply workqueue_attrs before making the workqueue visible in sysfs; otherwise, apply_workqueue_attrs() may race against userland updating the attributes. 
**Return** 0 on success, -errno on failure.h](h)}(h**Parameters**h]j)}(hjah]h Parameters}(hjchhhNhNubah}(h]h ]h"]h$]h&]uh1jhj_ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj[ubjS)}(hhh]jX)}(h:``struct workqueue_struct *wq`` the workqueue to register h](j^)}(h``struct workqueue_struct *wq``h]j)}(hjh]hstruct workqueue_struct *wq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj~ubah}(h]h ]h"]h$]h&]uh1j]hV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhjzubjw)}(hhh]h)}(hthe workqueue to registerh]hthe workqueue to register}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jvhjzubeh}(h]h ]h"]h$]h&]uh1jWhjhMhjwubah}(h]h ]h"]h$]h&]uh1jRhj[ubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj[ubh)}(hExpose **wq** in sysfs under /sys/bus/workqueue/devices. alloc_workqueue*() automatically calls this function if WQ_SYSFS is set which is the preferred method.h](hExpose }(hjhhhNhNubj)}(h**wq**h]hwq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh in sysfs under /sys/bus/workqueue/devices. 
alloc_workqueue*() automatically calls this function if WQ_SYSFS is set which is the preferred method.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj[ubh)}(hWorkqueue user should use this function directly iff it wants to apply workqueue_attrs before making the workqueue visible in sysfs; otherwise, apply_workqueue_attrs() may race against userland updating the attributes.h]hWorkqueue user should use this function directly iff it wants to apply workqueue_attrs before making the workqueue visible in sysfs; otherwise, apply_workqueue_attrs() may race against userland updating the attributes.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj[ubh)}(h **Return**h]j)}(hj h]hReturn}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj[ubh)}(h 0 on success, -errno on failure.h]h 0 on success, -errno on failure.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:789: ./kernel/workqueue.chMhj[ubeh}(h]h ] kernelindentah"]h$]h&]uh1jhjJhhhNhNubj\)}(hhh]h}(h]h ]h"]h$]h&]entries](jh'workqueue_sysfs_unregister (C function)c.workqueue_sysfs_unregisterhNtauh1j[hjJhhhNhNubjm)}(hhh](jr)}(h=void workqueue_sysfs_unregister (struct workqueue_struct *wq)h]jx)}(h