sphinx.addnodesdocument)}( rawsourcechildren]( translations LanguagesNode)}(hhh](h pending_xref)}(hhh]docutils.nodesTextChinese (Simplified)}parenthsba attributes}(ids]classes]names]dupnames]backrefs] refdomainstdreftypedoc reftarget&/translations/zh_CN/core-api/workqueuemodnameN classnameN refexplicitutagnamehhh ubh)}(hhh]hChinese (Traditional)}hh2sbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget&/translations/zh_TW/core-api/workqueuemodnameN classnameN refexplicituh1hhh ubh)}(hhh]hItalian}hhFsbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget&/translations/it_IT/core-api/workqueuemodnameN classnameN refexplicituh1hhh ubh)}(hhh]hJapanese}hhZsbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget&/translations/ja_JP/core-api/workqueuemodnameN classnameN refexplicituh1hhh ubh)}(hhh]hKorean}hhnsbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget&/translations/ko_KR/core-api/workqueuemodnameN classnameN refexplicituh1hhh ubh)}(hhh]hPortuguese (Brazilian)}hhsbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget&/translations/pt_BR/core-api/workqueuemodnameN classnameN refexplicituh1hhh ubh)}(hhh]hSpanish}hhsbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget&/translations/sp_SP/core-api/workqueuemodnameN classnameN refexplicituh1hhh ubeh}(h]h ]h"]h$]h&]current_languageEnglishuh1h hh _documenthsourceNlineNubhsection)}(hhh](htitle)}(h Workqueueh]h Workqueue}(hhhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhhh@/var/lib/git/docbuild/linux/Documentation/core-api/workqueue.rsthKubh field_list)}(hhh](hfield)}(hhh](h field_name)}(hDateh]hDate}(hhhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhhhKubh field_body)}(hSeptember, 2010h]h paragraph)}(hhh]hSeptember, 2010}(hhhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhhubah}(h]h ]h"]h$]h&]uh1hhhubeh}(h]h ]h"]h$]h&]uh1hhhhKhhhhubh)}(hhh](h)}(hAuthorh]hAuthor}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhKubh)}(hTejun Heo h]h)}(hjh](h Tejun Heo <}(hjhhhNhNubh reference)}(h tj@kernel.orgh]h tj@kernel.org}(hj$hhhNhNubah}(h]h 
]h"]h$]h&]refurimailto:tj@kernel.orguh1j"hjubh>}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1hhjubeh}(h]h ]h"]h$]h&]uh1hhhhKhhhhubh)}(hhh](h)}(hAuthorh]hAuthor}(hjMhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjJhhhKubh)}(h'Florian Mickler h]h)}(h%Florian Mickler h](hFlorian Mickler <}(hj_hhhNhNubj#)}(hflorian@mickler.orgh]hflorian@mickler.org}(hjghhhNhNubah}(h]h ]h"]h$]h&]refurimailto:florian@mickler.orguh1j"hj_ubh>}(hj_hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhj[ubah}(h]h ]h"]h$]h&]uh1hhjJubeh}(h]h ]h"]h$]h&]uh1hhhhKhhhhubeh}(h]h ]h"]h$]h&]uh1hhhhhhhhKubh)}(hhh](h)}(h Introductionh]h Introduction}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhK ubh)}(hThere are many cases where an asynchronous process execution context is needed and the workqueue (wq) API is the most commonly used mechanism for such cases.h]hThere are many cases where an asynchronous process execution context is needed and the workqueue (wq) API is the most commonly used mechanism for such cases.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK hjhhubh)}(hWhen such an asynchronous execution context is needed, a work item describing which function to execute is put on a queue. An independent thread serves as the asynchronous execution context. The queue is called workqueue and the thread is called worker.h]hWhen such an asynchronous execution context is needed, a work item describing which function to execute is put on a queue. An independent thread serves as the asynchronous execution context. The queue is called workqueue and the thread is called worker.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjhhubh)}(hXWhile there are work items on the workqueue the worker executes the functions associated with the work items one after the other. When there is no work item left on the workqueue the worker becomes idle. When a new work item gets queued, the worker begins executing again.h]hXWhile there are work items on the workqueue the worker executes the functions associated with the work items one after the other. 
When there is no work item left on the workqueue the worker becomes idle. When a new work item gets queued, the worker begins executing again.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjhhubeh}(h] introductionah ]h"] introductionah$]h&]uh1hhhhhhhhK ubh)}(hhh](h)}(h"Why Concurrency Managed Workqueue?h]h"Why Concurrency Managed Workqueue?}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhKubh)}(hXIn the original wq implementation, a multi threaded (MT) wq had one worker thread per CPU and a single threaded (ST) wq had one worker thread system-wide. A single MT wq needed to keep around the same number of workers as the number of CPUs. The kernel grew a lot of MT wq users over the years and with the number of CPU cores continuously rising, some systems saturated the default 32k PID space just booting up.h]hXIn the original wq implementation, a multi threaded (MT) wq had one worker thread per CPU and a single threaded (ST) wq had one worker thread system-wide. A single MT wq needed to keep around the same number of workers as the number of CPUs. The kernel grew a lot of MT wq users over the years and with the number of CPU cores continuously rising, some systems saturated the default 32k PID space just booting up.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjhhubh)}(hXAlthough MT wq wasted a lot of resource, the level of concurrency provided was unsatisfactory. The limitation was common to both ST and MT wq albeit less severe on MT. Each wq maintained its own separate worker pool. An MT wq could provide only one execution context per CPU while an ST wq one for the whole system. Work items had to compete for those very limited execution contexts leading to various problems including proneness to deadlocks around the single execution context.h]hXAlthough MT wq wasted a lot of resource, the level of concurrency provided was unsatisfactory. The limitation was common to both ST and MT wq albeit less severe on MT. Each wq maintained its own separate worker pool. 
An MT wq could provide only one execution context per CPU while an ST wq one for the whole system. Work items had to compete for those very limited execution contexts leading to various problems including proneness to deadlocks around the single execution context.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK'hjhhubh)}(hXThe tension between the provided level of concurrency and resource usage also forced its users to make unnecessary tradeoffs like libata choosing to use ST wq for polling PIOs and accepting an unnecessary limitation that no two polling PIOs can progress at the same time. As MT wq don't provide much better concurrency, users which require higher level of concurrency, like async or fscache, had to implement their own thread pool.h]hXThe tension between the provided level of concurrency and resource usage also forced its users to make unnecessary tradeoffs like libata choosing to use ST wq for polling PIOs and accepting an unnecessary limitation that no two polling PIOs can progress at the same time. As MT wq don’t provide much better concurrency, users which require higher level of concurrency, like async or fscache, had to implement their own thread pool.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK/hjhhubh)}(hcConcurrency Managed Workqueue (cmwq) is a reimplementation of wq with focus on the following goals.h]hcConcurrency Managed Workqueue (cmwq) is a reimplementation of wq with focus on the following goals.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK7hjhhubh bullet_list)}(hhh](h list_item)}(h8Maintain compatibility with the original workqueue API. h]h)}(h7Maintain compatibility with the original workqueue API.h]h7Maintain compatibility with the original workqueue API.}(hj*hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK:hj&ubah}(h]h ]h"]h$]h&]uh1j$hj!hhhhhNubj%)}(hUse per-CPU unified worker pools shared by all wq to provide flexible level of concurrency on demand without wasting a lot of resource. 
h]h)}(hUse per-CPU unified worker pools shared by all wq to provide flexible level of concurrency on demand without wasting a lot of resource.h]hUse per-CPU unified worker pools shared by all wq to provide flexible level of concurrency on demand without wasting a lot of resource.}(hjBhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKubah}(h]h ]h"]h$]h&]uh1j$hj!hhhhhNubj%)}(h{Automatically regulate worker pool and level of concurrency so that the API users don't need to worry about such details. h]h)}(hyAutomatically regulate worker pool and level of concurrency so that the API users don't need to worry about such details.h]h{Automatically regulate worker pool and level of concurrency so that the API users don’t need to worry about such details.}(hjZhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK@hjVubah}(h]h ]h"]h$]h&]uh1j$hj!hhhhhNubeh}(h]h ]h"]h$]h&]bullet*uh1jhhhK:hjhhubeh}(h]!why-concurrency-managed-workqueueah ]h"]"why concurrency managed workqueue?ah$]h&]uh1hhhhhhhhKubh)}(hhh](h)}(h The Designh]h The Design}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj~hhhhhKEubh)}(hiIn order to ease the asynchronous execution of functions a new abstraction, the work item, is introduced.h]hiIn order to ease the asynchronous execution of functions a new abstraction, the work item, is introduced.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKGhj~hhubh)}(hXA work item is a simple struct that holds a pointer to the function that is to be executed asynchronously. Whenever a driver or subsystem wants a function to be executed asynchronously it has to set up a work item pointing to that function and queue that work item on a workqueue.h]hXA work item is a simple struct that holds a pointer to the function that is to be executed asynchronously. 
Whenever a driver or subsystem wants a function to be executed asynchronously it has to set up a work item pointing to that function and queue that work item on a workqueue.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKJhj~hhubh)}(hKA work item can be executed in either a thread or the BH (softirq) context.h]hKA work item can be executed in either a thread or the BH (softirq) context.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKPhj~hhubh)}(hFor threaded workqueues, special purpose threads, called [k]workers, execute the functions off of the queue, one after the other. If no work is queued, the worker threads become idle. These worker threads are managed in worker-pools.h]hFor threaded workqueues, special purpose threads, called [k]workers, execute the functions off of the queue, one after the other. If no work is queued, the worker threads become idle. These worker threads are managed in worker-pools.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKRhj~hhubh)}(hThe cmwq design differentiates between the user-facing workqueues that subsystems and drivers queue work items on and the backend mechanism which manages worker-pools and processes the queued work items.h]hThe cmwq design differentiates between the user-facing workqueues that subsystems and drivers queue work items on and the backend mechanism which manages worker-pools and processes the queued work items.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKWhj~hhubh)}(hThere are two worker-pools, one for normal work items and the other for high priority ones, for each possible CPU and some extra worker-pools to serve work items queued on unbound workqueues - the number of these backing pools is dynamic.h]hThere are two worker-pools, one for normal work items and the other for high priority ones, for each possible CPU and some extra worker-pools to serve work items queued on unbound workqueues - the number of these backing pools is dynamic.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK[hj~hhubh)}(hX=BH workqueues use the same framework. 
However, as there can only be one concurrent execution context, there's no need to worry about concurrency. Each per-CPU BH worker pool contains only one pseudo worker which represents the BH execution context. A BH workqueue can be considered a convenience interface to softirq.h]hX?BH workqueues use the same framework. However, as there can only be one concurrent execution context, there’s no need to worry about concurrency. Each per-CPU BH worker pool contains only one pseudo worker which represents the BH execution context. A BH workqueue can be considered a convenience interface to softirq.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK`hj~hhubh)}(hXSubsystems and drivers can create and queue work items through special workqueue API functions as they see fit. They can influence some aspects of the way the work items are executed by setting flags on the workqueue they are putting the work item on. These flags include things like CPU locality, concurrency limits, priority and more. To get a detailed overview refer to the API description of ``alloc_workqueue()`` below.h](hXSubsystems and drivers can create and queue work items through special workqueue API functions as they see fit. They can influence some aspects of the way the work items are executed by setting flags on the workqueue they are putting the work item on. These flags include things like CPU locality, concurrency limits, priority and more. To get a detailed overview refer to the API description of }(hjhhhNhNubhliteral)}(h``alloc_workqueue()``h]halloc_workqueue()}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh below.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKfhj~hhubh)}(hXWhen a work item is queued to a workqueue, the target worker-pool is determined according to the queue parameters and workqueue attributes and appended on the shared worklist of the worker-pool. 
For example, unless specifically overridden, a work item of a bound workqueue will be queued on the worklist of either normal or highpri worker-pool that is associated to the CPU the issuer is running on.h]hXWhen a work item is queued to a workqueue, the target worker-pool is determined according to the queue parameters and workqueue attributes and appended on the shared worklist of the worker-pool. For example, unless specifically overridden, a work item of a bound workqueue will be queued on the worklist of either normal or highpri worker-pool that is associated to the CPU the issuer is running on.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKnhj~hhubh)}(hX#For any thread pool implementation, managing the concurrency level (how many execution contexts are active) is an important issue. cmwq tries to keep the concurrency at a minimal but sufficient level. Minimal to save resources and sufficient in that the system is used at its full capacity.h]hX#For any thread pool implementation, managing the concurrency level (how many execution contexts are active) is an important issue. cmwq tries to keep the concurrency at a minimal but sufficient level. Minimal to save resources and sufficient in that the system is used at its full capacity.}(hj!hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKuhj~hhubh)}(hXEach worker-pool bound to an actual CPU implements concurrency management by hooking into the scheduler. The worker-pool is notified whenever an active worker wakes up or sleeps and keeps track of the number of the currently runnable workers. Generally, work items are not expected to hog a CPU and consume many cycles. That means maintaining just enough concurrency to prevent work processing from stalling should be optimal. As long as there are one or more runnable workers on the CPU, the worker-pool doesn't start execution of a new work, but, when the last running worker goes to sleep, it immediately schedules a new worker so that the CPU doesn't sit idle while there are pending work items. 
This allows using a minimal number of workers without losing execution bandwidth.h]hXEach worker-pool bound to an actual CPU implements concurrency management by hooking into the scheduler. The worker-pool is notified whenever an active worker wakes up or sleeps and keeps track of the number of the currently runnable workers. Generally, work items are not expected to hog a CPU and consume many cycles. That means maintaining just enough concurrency to prevent work processing from stalling should be optimal. As long as there are one or more runnable workers on the CPU, the worker-pool doesn’t start execution of a new work, but, when the last running worker goes to sleep, it immediately schedules a new worker so that the CPU doesn’t sit idle while there are pending work items. This allows using a minimal number of workers without losing execution bandwidth.}(hj/hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK{hj~hhubh)}(hKeeping idle workers around doesn't cost other than the memory space for kthreads, so cmwq holds onto idle ones for a while before killing them.h]hKeeping idle workers around doesn’t cost other than the memory space for kthreads, so cmwq holds onto idle ones for a while before killing them.}(hj=hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj~hhubh)}(hXFor unbound workqueues, the number of backing pools is dynamic. Unbound workqueue can be assigned custom attributes using ``apply_workqueue_attrs()`` and workqueue will automatically create backing worker pools matching the attributes. The responsibility of regulating concurrency level is on the users. There is also a flag to mark a bound wq to ignore the concurrency management. Please refer to the API section for details.h](hzFor unbound workqueues, the number of backing pools is dynamic. 
Unbound workqueue can be assigned custom attributes using }(hjKhhhNhNubj)}(h``apply_workqueue_attrs()``h]happly_workqueue_attrs()}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1jhjKubhX and workqueue will automatically create backing worker pools matching the attributes. The responsibility of regulating concurrency level is on the users. There is also a flag to mark a bound wq to ignore the concurrency management. Please refer to the API section for details.}(hjKhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhj~hhubh)}(hXForward progress guarantee relies on that workers can be created when more execution contexts are necessary, which in turn is guaranteed through the use of rescue workers. All work items which might be used on code paths that handle memory reclaim are required to be queued on wq's that have a rescue-worker reserved for execution under memory pressure. Else it is possible that the worker-pool deadlocks waiting for execution contexts to free up.h]hXForward progress guarantee relies on that workers can be created when more execution contexts are necessary, which in turn is guaranteed through the use of rescue workers. All work items which might be used on code paths that handle memory reclaim are required to be queued on wq’s that have a rescue-worker reserved for execution under memory pressure. Else it is possible that the worker-pool deadlocks waiting for execution contexts to free up.}(hjkhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj~hhubeh}(h] the-designah ]h"] the designah$]h&]uh1hhhhhhhhKEubh)}(hhh](h)}(h'Application Programming Interface (API)h]h'Application Programming Interface (API)}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhKubh)}(hX;``alloc_workqueue()`` allocates a wq. The original ``create_*workqueue()`` functions are deprecated and scheduled for removal. ``alloc_workqueue()`` takes three arguments - ``@name``, ``@flags`` and ``@max_active``. 
``@name`` is the name of the wq and also used as the name of the rescuer thread if there is one.h](j)}(h``alloc_workqueue()``h]halloc_workqueue()}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh allocates a wq. The original }(hjhhhNhNubj)}(h``create_*workqueue()``h]hcreate_*workqueue()}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh6 functions are deprecated and scheduled for removal. }(hjhhhNhNubj)}(h``alloc_workqueue()``h]halloc_workqueue()}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh takes three arguments - }(hjhhhNhNubj)}(h ``@name``h]h@name}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh, }(hjhhhNhNubj)}(h ``@flags``h]h@flags}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh and }(hjhhhNhNubj)}(h``@max_active``h]h @max_active}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh. }(hjhhhNhNubj)}(h ``@name``h]h@name}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhW is the name of the wq and also used as the name of the rescuer thread if there is one.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjhhubh)}(hA wq no longer manages execution resources but serves as a domain for forward progress guarantee, flush and work item attributes. ``@flags`` and ``@max_active`` control how work items are assigned execution resources, scheduled and executed.h](hA wq no longer manages execution resources but serves as a domain for forward progress guarantee, flush and work item attributes. }(hjhhhNhNubj)}(h ``@flags``h]h@flags}(hj"hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh and }(hjhhhNhNubj)}(h``@max_active``h]h @max_active}(hj4hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhQ control how work items are assigned execution resources, scheduled and executed.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjhhubh)}(hhh](h)}(h ``flags``h]j)}(hjQh]hflags}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1jhjOubah}(h]h ]h"]h$]h&]uh1hhjLhhhhhKubhdefinition_list)}(hhh](hdefinition_list_item)}(hX``WQ_BH`` BH workqueues can be considered a convenience interface to softirq. BH workqueues are always per-CPU and all BH work items are executed in the queueing CPU's softirq context in the queueing order. 
All BH workqueues must have 0 ``max_active`` and ``WQ_HIGHPRI`` is the only allowed additional flag. BH work items cannot sleep. All other features such as delayed queueing, flushing and canceling are supported. h](hterm)}(h ``WQ_BH``h]j)}(hjuh]hWQ_BH}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjsubah}(h]h ]h"]h$]h&]uh1jqhhhKhjmubh definition)}(hhh](h)}(hBH workqueues can be considered a convenience interface to softirq. BH workqueues are always per-CPU and all BH work items are executed in the queueing CPU's softirq context in the queueing order.h]hBH workqueues can be considered a convenience interface to softirq. BH workqueues are always per-CPU and all BH work items are executed in the queueing CPU’s softirq context in the queueing order.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubh)}(hdAll BH workqueues must have 0 ``max_active`` and ``WQ_HIGHPRI`` is the only allowed additional flag.h](hAll BH workqueues must have 0 }(hjhhhNhNubj)}(h``max_active``h]h max_active}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh and }(hjhhhNhNubj)}(h``WQ_HIGHPRI``h]h WQ_HIGHPRI}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh% is the only allowed additional flag.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjubh)}(hnBH work items cannot sleep. All other features such as delayed queueing, flushing and canceling are supported.h]hnBH work items cannot sleep. All other features such as delayed queueing, flushing and canceling are supported.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubeh}(h]h ]h"]h$]h&]uh1jhjmubeh}(h]h ]h"]h$]h&]uh1jkhhhKhjhubjl)}(h``WQ_PERCPU`` Work items queued to a per-cpu wq are bound to a specific CPU. This flag is the right choice when cpu locality is important. This flag is the complement of ``WQ_UNBOUND``. h](jr)}(h ``WQ_PERCPU``h]j)}(hjh]h WQ_PERCPU}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhhhKhjubj)}(hhh](h)}(h|Work items queued to a per-cpu wq are bound to a specific CPU. 
This flag is the right choice when cpu locality is important.h]h|Work items queued to a per-cpu wq are bound to a specific CPU. This flag is the right choice when cpu locality is important.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubh)}(h.This flag is the complement of ``WQ_UNBOUND``.h](hThis flag is the complement of }(hjhhhNhNubj)}(h``WQ_UNBOUND``h]h WQ_UNBOUND}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjubeh}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhhhKhjhhhubjl)}(hX``WQ_UNBOUND`` Work items queued to an unbound wq are served by the special worker-pools which host workers which are not bound to any specific CPU. This makes the wq behave as a simple execution context provider without concurrency management. The unbound worker-pools try to start execution of work items as soon as possible. Unbound wq sacrifices locality but is useful for the following cases. * Wide fluctuation in the concurrency level requirement is expected and using bound wq may end up creating large number of mostly unused workers across different CPUs as the issuer hops through different CPUs. * Long running CPU intensive workloads which can be better managed by the system scheduler. h](jr)}(h``WQ_UNBOUND``h]j)}(hjGh]h WQ_UNBOUND}(hjIhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjEubah}(h]h ]h"]h$]h&]uh1jqhhhKhjAubj)}(hhh](h)}(hXWork items queued to an unbound wq are served by the special worker-pools which host workers which are not bound to any specific CPU. This makes the wq behave as a simple execution context provider without concurrency management. The unbound worker-pools try to start execution of work items as soon as possible. Unbound wq sacrifices locality but is useful for the following cases.h]hXWork items queued to an unbound wq are served by the special worker-pools which host workers which are not bound to any specific CPU. This makes the wq behave as a simple execution context provider without concurrency management. 
The unbound worker-pools try to start execution of work items as soon as possible. Unbound wq sacrifices locality but is useful for the following cases.}(hj_hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj\ubj )}(hhh](j%)}(hWide fluctuation in the concurrency level requirement is expected and using bound wq may end up creating large number of mostly unused workers across different CPUs as the issuer hops through different CPUs. h]h)}(hWide fluctuation in the concurrency level requirement is expected and using bound wq may end up creating large number of mostly unused workers across different CPUs as the issuer hops through different CPUs.h]hWide fluctuation in the concurrency level requirement is expected and using bound wq may end up creating large number of mostly unused workers across different CPUs as the issuer hops through different CPUs.}(hjthhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjpubah}(h]h ]h"]h$]h&]uh1j$hjmubj%)}(hZLong running CPU intensive workloads which can be better managed by the system scheduler. h]h)}(hYLong running CPU intensive workloads which can be better managed by the system scheduler.h]hYLong running CPU intensive workloads which can be better managed by the system scheduler.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1j$hjmubeh}(h]h ]h"]h$]h&]jtjuuh1jhhhKhj\ubeh}(h]h ]h"]h$]h&]uh1jhjAubeh}(h]h ]h"]h$]h&]uh1jkhhhKhjhhhubjl)}(h``WQ_FREEZABLE`` A freezable wq participates in the freeze phase of the system suspend operations. Work items on the wq are drained and no new work item starts execution until thawed. h](jr)}(h``WQ_FREEZABLE``h]j)}(hjh]h WQ_FREEZABLE}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhhhKhjubj)}(hhh]h)}(hA freezable wq participates in the freeze phase of the system suspend operations. Work items on the wq are drained and no new work item starts execution until thawed.h]hA freezable wq participates in the freeze phase of the system suspend operations. 
Work items on the wq are drained and no new work item starts execution until thawed.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhhhKhjhhhubjl)}(h``WQ_MEM_RECLAIM`` All wq which might be used in the memory reclaim paths **MUST** have this flag set. The wq is guaranteed to have at least one execution context regardless of memory pressure. h](jr)}(h``WQ_MEM_RECLAIM``h]j)}(hjh]hWQ_MEM_RECLAIM}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhhhKhjubj)}(hhh]h)}(hAll wq which might be used in the memory reclaim paths **MUST** have this flag set. The wq is guaranteed to have at least one execution context regardless of memory pressure.h](h7All wq which might be used in the memory reclaim paths }(hjhhhNhNubhstrong)}(h**MUST**h]hMUST}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhp have this flag set. The wq is guaranteed to have at least one execution context regardless of memory pressure.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhhhKhjhhhubjl)}(hXa``WQ_HIGHPRI`` Work items of a highpri wq are queued to the highpri worker-pool of the target cpu. Highpri worker-pools are served by worker threads with elevated nice level. Note that normal and highpri worker-pools don't interact with each other. Each maintains its separate pool of workers and implements concurrency management among its workers. h](jr)}(h``WQ_HIGHPRI``h]j)}(hj<h]h WQ_HIGHPRI}(hj>hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj:ubah}(h]h ]h"]h$]h&]uh1jqhhhKhj6ubj)}(hhh](h)}(hWork items of a highpri wq are queued to the highpri worker-pool of the target cpu. Highpri worker-pools are served by worker threads with elevated nice level.h]hWork items of a highpri wq are queued to the highpri worker-pool of the target cpu. Highpri worker-pools are served by worker threads with elevated nice level.}(hjThhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjQubh)}(hNote that normal and highpri worker-pools don't interact with each other. 
Each maintains its separate pool of workers and implements concurrency management among its workers.h]hNote that normal and highpri worker-pools don’t interact with each other. Each maintains its separate pool of workers and implements concurrency management among its workers.}(hjbhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjQubeh}(h]h ]h"]h$]h&]uh1jhj6ubeh}(h]h ]h"]h$]h&]uh1jkhhhKhjhhhubjl)}(hX``WQ_CPU_INTENSIVE`` Work items of a CPU intensive wq do not contribute to the concurrency level. In other words, runnable CPU intensive work items will not prevent other work items in the same worker-pool from starting execution. This is useful for bound work items which are expected to hog CPU cycles so that their execution is regulated by the system scheduler. Although CPU intensive work items don't contribute to the concurrency level, start of their executions is still regulated by the concurrency management and runnable non-CPU-intensive work items can delay execution of CPU intensive work items. This flag is meaningless for unbound wq. h](jr)}(h``WQ_CPU_INTENSIVE``h]j)}(hjh]hWQ_CPU_INTENSIVE}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhhhKhj|ubj)}(hhh](h)}(hXZWork items of a CPU intensive wq do not contribute to the concurrency level. In other words, runnable CPU intensive work items will not prevent other work items in the same worker-pool from starting execution. This is useful for bound work items which are expected to hog CPU cycles so that their execution is regulated by the system scheduler.h]hXZWork items of a CPU intensive wq do not contribute to the concurrency level. In other words, runnable CPU intensive work items will not prevent other work items in the same worker-pool from starting execution. 
This is useful for bound work items which are expected to hog CPU cycles so that their execution is regulated by the system scheduler.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubh)}(hAlthough CPU intensive work items don't contribute to the concurrency level, start of their executions is still regulated by the concurrency management and runnable non-CPU-intensive work items can delay execution of CPU intensive work items.h]hAlthough CPU intensive work items don’t contribute to the concurrency level, start of their executions is still regulated by the concurrency management and runnable non-CPU-intensive work items can delay execution of CPU intensive work items.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubh)}(h(This flag is meaningless for unbound wq.h]h(This flag is meaningless for unbound wq.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubeh}(h]h ]h"]h$]h&]uh1jhj|ubeh}(h]h ]h"]h$]h&]uh1jkhhhKhjhhhubeh}(h]h ]h"]h$]h&]uh1jfhjLhhhhhNubeh}(h]flagsah ]h"]flagsah$]h&]uh1hhjhhhhhKubh)}(hhh](h)}(h``max_active``h]j)}(hjh]h max_active}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhjhhhhhKubh)}(hX1``@max_active`` determines the maximum number of execution contexts per CPU which can be assigned to the work items of a wq. For example, with ``@max_active`` of 16, at most 16 work items of the wq can be executing at the same time per CPU. This is always a per-CPU attribute, even for unbound workqueues.h](j)}(h``@max_active``h]h @max_active}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh determines the maximum number of execution contexts per CPU which can be assigned to the work items of a wq. For example, with }(hjhhhNhNubj)}(h``@max_active``h]h @max_active}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh of 16, at most 16 work items of the wq can be executing at the same time per CPU. 
This is always a per-CPU attribute, even for unbound workqueues.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjhhubh)}(hThe maximum limit for ``@max_active`` is 2048 and the default value used when 0 is specified is 1024. These values are chosen sufficiently high such that they are not the limiting factor while providing protection in runaway cases.h](hThe maximum limit for }(hj&hhhNhNubj)}(h``@max_active``h]h @max_active}(hj.hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj&ubh is 2048 and the default value used when 0 is specified is 1024. These values are chosen sufficiently high such that they are not the limiting factor while providing protection in runaway cases.}(hj&hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhKhjhhubh)}(hXThe number of active work items of a wq is usually regulated by the users of the wq, more specifically, by how many work items the users may queue at the same time. Unless there is a specific need for throttling the number of active work items, specifying '0' is recommended.h]hXThe number of active work items of a wq is usually regulated by the users of the wq, more specifically, by how many work items the users may queue at the same time. Unless there is a specific need for throttling the number of active work items, specifying ‘0’ is recommended.}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubh)}(hX=Some users depend on strict execution ordering where only one work item is in flight at any given time and the work items are processed in queueing order. While the combination of ``@max_active`` of 1 and ``WQ_UNBOUND`` used to achieve this behavior, this is no longer the case. Use alloc_ordered_workqueue() instead.h](hSome users depend on strict execution ordering where only one work item is in flight at any given time and the work items are processed in queueing order. 
While the combination of }(hjThhhNhNubj)}(h``@max_active``h]h @max_active}(hj\hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjTubh of 1 and }(hjThhhNhNubj)}(h``WQ_UNBOUND``h]h WQ_UNBOUND}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjTubhb used to achieve this behavior, this is no longer the case. Use alloc_ordered_workqueue() instead.}(hjThhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhM hjhhubeh}(h] max-activeah ]h"] max_activeah$]h&]uh1hhjhhhhhKubeh}(h]%application-programming-interface-apiah ]h"]'application programming interface (api)ah$]h&]uh1hhhhhhhhKubh)}(hhh](h)}(hExample Execution Scenariosh]hExample Execution Scenarios}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhMubh)}(hkThe following example execution scenarios try to illustrate how cmwq behave under different configurations.h]hkThe following example execution scenarios try to illustrate how cmwq behave under different configurations.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubh block_quote)}(hWork items w0, w1, w2 are queued to a bound wq q0 on the same CPU. w0 burns CPU for 5ms then sleeps for 10ms then burns CPU for 5ms again before finishing. w1 and w2 burn CPU for 5ms then sleep for 10ms. h]h)}(hWork items w0, w1, w2 are queued to a bound wq q0 on the same CPU. w0 burns CPU for 5ms then sleeps for 10ms then burns CPU for 5ms again before finishing. w1 and w2 burn CPU for 5ms then sleep for 10ms.h]hWork items w0, w1, w2 are queued to a bound wq q0 on the same CPU. w0 burns CPU for 5ms then sleeps for 10ms then burns CPU for 5ms again before finishing. w1 and w2 burn CPU for 5ms then sleep for 10ms.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjubah}(h]h ]h"]h$]h&]uh1jhhhMhjhhubh)}(hIgnoring all other tasks, works and processing overhead, and assuming simple FIFO scheduling, the following is one highly simplified version of possible sequences of events with the original wq. 
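The timelines that follow can be reproduced with a toy model of cmwq's queueing rules. This is a hedged sketch, not kernel code: it assumes a single CPU, FIFO dispatch, at most ``max_active`` work items in flight, and that a queued item starts only when no in-flight item is burning CPU.

```python
def simulate(works, max_active):
    """Toy cmwq model: one CPU, FIFO queue, at most max_active work
    items in flight at once.  works[i] is a list of ("burn", ms) and
    ("sleep", ms) segments.  Returns {item: finish time in ms}."""
    n = len(works)
    pending = list(range(n))                      # queueing order
    inflight = []                                 # started but unfinished
    pos = {i: 0 for i in range(n)}                # current segment index
    left = {i: works[i][0][1] for i in range(n)}  # ms left in segment
    finish, t = {}, 0
    while len(finish) < n:
        burning = [i for i in inflight if works[i][pos[i]][0] == "burn"]
        # a queued item starts only when the CPU is free and an
        # execution context (a max_active slot) is available
        if not burning and pending and len(inflight) < max_active:
            i = pending.pop(0)
            inflight.append(i)
            burning = [i]
        for i in list(inflight):                  # advance one millisecond
            kind = works[i][pos[i]][0]
            if kind == "sleep" or (burning and i == burning[0]):
                left[i] -= 1
                if left[i] == 0:
                    pos[i] += 1
                    if pos[i] == len(works[i]):
                        finish[i] = t + 1
                        inflight.remove(i)
                    else:
                        left[i] = works[i][pos[i]][1]
        t += 1
    return finish

# the work items from the scenario above
w0 = [("burn", 5), ("sleep", 10), ("burn", 5)]
w1 = [("burn", 5), ("sleep", 10)]
w2 = [("burn", 5), ("sleep", 10)]
```

Under this model, ``simulate([w0, w1, w2], 3)`` yields finish times of 20, 20 and 25 ms, matching the ``@max_active`` >= 3 timeline, and ``simulate([w0, w1, w2], 2)`` yields 20, 20 and 35 ms, matching the ``@max_active`` == 2 timeline.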
::h]hIgnoring all other tasks, works and processing overhead, and assuming simple FIFO scheduling, the following is one highly simplified version of possible sequences of events with the original wq.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubh literal_block)}(hXhTIME IN MSECS EVENT 0 w0 starts and burns CPU 5 w0 sleeps 15 w0 wakes up and burns CPU 20 w0 finishes 20 w1 starts and burns CPU 25 w1 sleeps 35 w1 wakes up and finishes 35 w2 starts and burns CPU 40 w2 sleeps 50 w2 wakes up and finishesh]hXhTIME IN MSECS EVENT 0 w0 starts and burns CPU 5 w0 sleeps 15 w0 wakes up and burns CPU 20 w0 finishes 20 w1 starts and burns CPU 25 w1 sleeps 35 w1 wakes up and finishes 35 w2 starts and burns CPU 40 w2 sleeps 50 w2 wakes up and finishes}hjsbah}(h]h ]h"]h$]h&] xml:spacepreserveuh1jhhhMhjhhubh)}(h+And with cmwq with ``@max_active`` >= 3, ::h](hAnd with cmwq with }(hjhhhNhNubj)}(h``@max_active``h]h @max_active}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh >= 3,}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhM+hjhhubj)}(hXhTIME IN MSECS EVENT 0 w0 starts and burns CPU 5 w0 sleeps 5 w1 starts and burns CPU 10 w1 sleeps 10 w2 starts and burns CPU 15 w2 sleeps 15 w0 wakes up and burns CPU 20 w0 finishes 20 w1 wakes up and finishes 25 w2 wakes up and finishesh]hXhTIME IN MSECS EVENT 0 w0 starts and burns CPU 5 w0 sleeps 5 w1 starts and burns CPU 10 w1 sleeps 10 w2 starts and burns CPU 15 w2 sleeps 15 w0 wakes up and burns CPU 20 w0 finishes 20 w1 wakes up and finishes 25 w2 wakes up and finishes}hjsbah}(h]h ]h"]h$]h&]jjuh1jhhhM-hjhhubh)}(hIf ``@max_active`` == 2, ::h](hIf }(hjhhhNhNubj)}(h``@max_active``h]h @max_active}(hj%hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh == 2,}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhM9hjhhubj)}(hXhTIME IN MSECS EVENT 0 w0 starts and burns CPU 5 w0 sleeps 5 w1 starts and burns CPU 10 w1 sleeps 15 w0 wakes up and burns CPU 20 w0 finishes 20 w1 wakes up and finishes 20 w2 starts and burns CPU 25 w2 sleeps 35 w2 wakes up and finishesh]hXhTIME IN MSECS EVENT 0 w0 starts and burns 
CPU 5 w0 sleeps 5 w1 starts and burns CPU 10 w1 sleeps 15 w0 wakes up and burns CPU 20 w0 finishes 20 w1 wakes up and finishes 20 w2 starts and burns CPU 25 w2 sleeps 35 w2 wakes up and finishes}hj=sbah}(h]h ]h"]h$]h&]jjuh1jhhhM;hjhhubh)}(hbNow, let's assume w1 and w2 are queued to a different wq q1 which has ``WQ_CPU_INTENSIVE`` set, ::h](hHNow, let’s assume w1 and w2 are queued to a different wq q1 which has }(hjKhhhNhNubj)}(h``WQ_CPU_INTENSIVE``h]hWQ_CPU_INTENSIVE}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1jhjKubh set,}(hjKhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMGhjhhubj)}(hXFTIME IN MSECS EVENT 0 w0 starts and burns CPU 5 w0 sleeps 5 w1 and w2 start and burn CPU 10 w1 sleeps 15 w2 sleeps 15 w0 wakes up and burns CPU 20 w0 finishes 20 w1 wakes up and finishes 25 w2 wakes up and finishesh]hXFTIME IN MSECS EVENT 0 w0 starts and burns CPU 5 w0 sleeps 5 w1 and w2 start and burn CPU 10 w1 sleeps 15 w2 sleeps 15 w0 wakes up and burns CPU 20 w0 finishes 20 w1 wakes up and finishes 25 w2 wakes up and finishes}hjksbah}(h]h ]h"]h$]h&]jjuh1jhhhMJhjhhubeh}(h]example-execution-scenariosah ]h"]example execution scenariosah$]h&]uh1hhhhhhhhMubh)}(hhh](h)}(h Guidelinesh]h Guidelines}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhMWubj )}(hhh](j%)}(hXMDo not forget to use ``WQ_MEM_RECLAIM`` if a wq may process work items which are used during memory reclaim. Each wq with ``WQ_MEM_RECLAIM`` set has an execution context reserved for it. If there is dependency among multiple work items used during memory reclaim, they should be queued to separate wq each with ``WQ_MEM_RECLAIM``. h]h)}(hXLDo not forget to use ``WQ_MEM_RECLAIM`` if a wq may process work items which are used during memory reclaim. Each wq with ``WQ_MEM_RECLAIM`` set has an execution context reserved for it. 
If there is dependency among multiple work items used during memory reclaim, they should be queued to separate wq each with ``WQ_MEM_RECLAIM``.h](hDo not forget to use }(hjhhhNhNubj)}(h``WQ_MEM_RECLAIM``h]hWQ_MEM_RECLAIM}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhT if a wq may process work items which are used during memory reclaim. Each wq with }(hjhhhNhNubj)}(h``WQ_MEM_RECLAIM``h]hWQ_MEM_RECLAIM}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh set has an execution context reserved for it. If there is dependency among multiple work items used during memory reclaim, they should be queued to separate wq each with }(hjhhhNhNubj)}(h``WQ_MEM_RECLAIM``h]hWQ_MEM_RECLAIM}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMYhjubah}(h]h ]h"]h$]h&]uh1j$hjhhhhhNubj%)}(hCUnless strict ordering is required, there is no need to use ST wq. h]h)}(hBUnless strict ordering is required, there is no need to use ST wq.h]hBUnless strict ordering is required, there is no need to use ST wq.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM`hjubah}(h]h ]h"]h$]h&]uh1j$hjhhhhhNubj%)}(hUnless there is a specific need, using 0 for @max_active is recommended. In most use cases, concurrency level usually stays well under the default limit. h]h)}(hUnless there is a specific need, using 0 for @max_active is recommended. In most use cases, concurrency level usually stays well under the default limit.h]hUnless there is a specific need, using 0 for @max_active is recommended. In most use cases, concurrency level usually stays well under the default limit.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMbhjubah}(h]h ]h"]h$]h&]uh1j$hjhhhhhNubj%)}(hXA wq serves as a domain for forward progress guarantee (``WQ_MEM_RECLAIM``), flush and work item attributes. Work items which are not involved in memory reclaim and don't need to be flushed as a part of a group of work items, and don't require any special attribute, can use one of the system wq. 
There is no difference in execution characteristics between using a dedicated wq and a system wq. Note: If something may generate more than @max_active outstanding work items (do stress test your producers), it may saturate a system wq and potentially lead to deadlock. It should utilize its own dedicated workqueue rather than the system wq. h](h)}(hXA wq serves as a domain for forward progress guarantee (``WQ_MEM_RECLAIM``), flush and work item attributes. Work items which are not involved in memory reclaim and don't need to be flushed as a part of a group of work items, and don't require any special attribute, can use one of the system wq. There is no difference in execution characteristics between using a dedicated wq and a system wq.h](h8A wq serves as a domain for forward progress guarantee (}(hj hhhNhNubj)}(h``WQ_MEM_RECLAIM``h]hWQ_MEM_RECLAIM}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubhXE), flush and work item attributes. Work items which are not involved in memory reclaim and don’t need to be flushed as a part of a group of work items, and don’t require any special attribute, can use one of the system wq. There is no difference in execution characteristics between using a dedicated wq and a system wq.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMfhj ubh)}(hNote: If something may generate more than @max_active outstanding work items (do stress test your producers), it may saturate a system wq and potentially lead to deadlock. It should utilize its own dedicated workqueue rather than the system wq.h]hNote: If something may generate more than @max_active outstanding work items (do stress test your producers), it may saturate a system wq and potentially lead to deadlock. 
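The deadlock hazard can be demonstrated with a userspace analogy. The sketch below uses Python's ``ThreadPoolExecutor`` as a stand-in for a shared wq (an illustration only, not kernel code): work item A waits on work item B queued behind it, so a pool with a single execution context gets stuck, while a pool with a spare context completes.

```python
from concurrent.futures import ThreadPoolExecutor

def run_dependent_work(workers):
    """Work item A waits on work item B queued to the same pool.
    With only one execution context, A occupies it while B sits in
    the queue -- the analogue of a saturated wq deadlock.  A short
    timeout stands in for 'stuck forever' here."""
    pool = ThreadPoolExecutor(max_workers=workers)
    def item_b():
        return "done"
    def item_a():
        # A blocks waiting for B's result
        return pool.submit(item_b).result(timeout=1)
    try:
        return pool.submit(item_a).result(timeout=2)
    except Exception:
        return "stuck"
```

``run_dependent_work(2)`` completes and returns ``"done"``, while ``run_dependent_work(1)`` hits the timeout and returns ``"stuck"``; in the kernel the equivalent situation has no timeout and simply deadlocks.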
It should utilize its own dedicated workqueue rather than the system wq.}(hj7 hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMnhj ubeh}(h]h ]h"]h$]h&]uh1j$hjhhhhhNubj%)}(hUnless work items are expected to consume a huge amount of CPU cycles, using a bound wq is usually beneficial due to the increased level of locality in wq operations and work item execution. h]h)}(hUnless work items are expected to consume a huge amount of CPU cycles, using a bound wq is usually beneficial due to the increased level of locality in wq operations and work item execution.h]hUnless work items are expected to consume a huge amount of CPU cycles, using a bound wq is usually beneficial due to the increased level of locality in wq operations and work item execution.}(hjO hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMshjK ubah}(h]h ]h"]h$]h&]uh1j$hjhhhhhNubeh}(h]h ]h"]h$]h&]jtjuuh1jhhhMYhjhhubeh}(h] guidelinesah ]h"] guidelinesah$]h&]uh1hhhhhhhhMWubh)}(hhh](h)}(hAffinity Scopesh]hAffinity Scopes}(hjt hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjq hhhhhMyubh)}(hXAn unbound workqueue groups CPUs according to its affinity scope to improve cache locality. For example, if a workqueue is using the default affinity scope of "cache_shard", it will group CPUs into sub-LLC shards. A work item queued on the workqueue will be assigned to a worker on one of the CPUs within the same shard as the issuing CPU. Once started, the worker may or may not be allowed to move outside the scope depending on the ``affinity_strict`` setting of the scope.h](hXAn unbound workqueue groups CPUs according to its affinity scope to improve cache locality. For example, if a workqueue is using the default affinity scope of “cache_shard”, it will group CPUs into sub-LLC shards. A work item queued on the workqueue will be assigned to a worker on one of the CPUs within the same shard as the issuing CPU. 
Once started, the worker may or may not be allowed to move outside the scope depending on the }(hj hhhNhNubj)}(h``affinity_strict``h]haffinity_strict}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubh setting of the scope.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhM{hjq hhubh)}(h;Workqueue currently supports the following affinity scopes.h]h;Workqueue currently supports the following affinity scopes.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjq hhubjg)}(hhh](jl)}(h``default`` Use the scope in module parameter ``workqueue.default_affinity_scope`` which is always set to one of the scopes below. h](jr)}(h ``default``h]j)}(hj h]hdefault}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1jqhhhMhj ubj)}(hhh]h)}(hvUse the scope in module parameter ``workqueue.default_affinity_scope`` which is always set to one of the scopes below.h](h"Use the scope in module parameter }(hj hhhNhNubj)}(h$``workqueue.default_affinity_scope``h]h workqueue.default_affinity_scope}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubh0 which is always set to one of the scopes below.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhj ubah}(h]h ]h"]h$]h&]uh1jhj ubeh}(h]h ]h"]h$]h&]uh1jkhhhMhj ubjl)}(h``cpu`` CPUs are not grouped. A work item issued on one CPU is processed by a worker on the same CPU. This makes unbound workqueues behave as per-cpu workqueues without concurrency management. h](jr)}(h``cpu``h]j)}(hj h]hcpu}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1jqhhhMhj ubj)}(hhh]h)}(hCPUs are not grouped. A work item issued on one CPU is processed by a worker on the same CPU. This makes unbound workqueues behave as per-cpu workqueues without concurrency management.h]hCPUs are not grouped. A work item issued on one CPU is processed by a worker on the same CPU. This makes unbound workqueues behave as per-cpu workqueues without concurrency management.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj ubah}(h]h ]h"]h$]h&]uh1jhj ubeh}(h]h ]h"]h$]h&]uh1jkhhhMhj hhubjl)}(h``smt`` CPUs are grouped according to SMT boundaries. 
This usually means that the logical threads of each physical CPU core are grouped together. h](jr)}(h``smt``h]j)}(hj; h]hsmt}(hj= hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj9 ubah}(h]h ]h"]h$]h&]uh1jqhhhMhj5 ubj)}(hhh]h)}(hCPUs are grouped according to SMT boundaries. This usually means that the logical threads of each physical CPU core are grouped together.h]hCPUs are grouped according to SMT boundaries. This usually means that the logical threads of each physical CPU core are grouped together.}(hjS hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjP ubah}(h]h ]h"]h$]h&]uh1jhj5 ubeh}(h]h ]h"]h$]h&]uh1jkhhhMhj hhubjl)}(h``cache`` CPUs are grouped according to cache boundaries. Which specific cache boundary is used is determined by the arch code. L3 is used in a lot of cases. h](jr)}(h ``cache``h]j)}(hjs h]hcache}(hju hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjq ubah}(h]h ]h"]h$]h&]uh1jqhhhMhjm ubj)}(hhh]h)}(hCPUs are grouped according to cache boundaries. Which specific cache boundary is used is determined by the arch code. L3 is used in a lot of cases.h]hCPUs are grouped according to cache boundaries. Which specific cache boundary is used is determined by the arch code. L3 is used in a lot of cases.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj ubah}(h]h ]h"]h$]h&]uh1jhjm ubeh}(h]h ]h"]h$]h&]uh1jkhhhMhj hhubjl)}(hX``cache_shard`` CPUs are grouped into sub-LLC shards of at most ``wq_cache_shard_size`` cores (default 8, tunable via the ``workqueue.cache_shard_size`` boot parameter). Shards are always split on core (SMT group) boundaries. This is the default affinity scope. h](jr)}(h``cache_shard``h]j)}(hj h]h cache_shard}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1jqhhhMhj ubj)}(hhh]h)}(hCPUs are grouped into sub-LLC shards of at most ``wq_cache_shard_size`` cores (default 8, tunable via the ``workqueue.cache_shard_size`` boot parameter). Shards are always split on core (SMT group) boundaries. 
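The splitting rule can be sketched in a few lines of Python. This is a toy illustration (the helper name and the CPU numbering below are made up): the cores sharing an LLC are chunked into shards of at most ``shard_size`` cores, never separating SMT siblings.

```python
def build_shards(llc_cores, shard_size=8):
    """Split the cores sharing one last-level cache into shards of at
    most shard_size cores.  Each element of llc_cores is one SMT group
    (the logical CPUs of one physical core), so a shard boundary can
    never fall between SMT siblings."""
    return [llc_cores[i:i + shard_size]
            for i in range(0, len(llc_cores), shard_size)]

# hypothetical 12-core/24-thread LLC: core c has logical CPUs c and c + 12
llc = [[c, c + 12] for c in range(12)]
shards = build_shards(llc)   # two shards: 8 cores and 4 cores
```

With the default shard size, the hypothetical 12-core LLC splits into one 8-core shard and one 4-core shard, and every CPU stays grouped with its SMT sibling.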
This is the default affinity scope.h](h0CPUs are grouped into sub-LLC shards of at most }(hj hhhNhNubj)}(h``wq_cache_shard_size``h]hwq_cache_shard_size}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubh# cores (default 8, tunable via the }(hj hhhNhNubj)}(h``workqueue.cache_shard_size``h]hworkqueue.cache_shard_size}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubhm boot parameter). Shards are always split on core (SMT group) boundaries. This is the default affinity scope.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhj ubah}(h]h ]h"]h$]h&]uh1jhj ubeh}(h]h ]h"]h$]h&]uh1jkhhhMhj hhubjl)}(h8``numa`` CPUs are grouped according to NUMA boundaries. h](jr)}(h``numa``h]j)}(hj h]hnuma}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1jqhhhMhj ubj)}(hhh]h)}(h.CPUs are grouped according to NUMA boundaries.h]h.CPUs are grouped according to NUMA boundaries.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj ubah}(h]h ]h"]h$]h&]uh1jhj ubeh}(h]h ]h"]h$]h&]uh1jkhhhMhj hhubjl)}(h``system`` All CPUs are put in the same group. Workqueue makes no effort to process a work item on a CPU close to the issuing CPU. h](jr)}(h ``system``h]j)}(hj? h]hsystem}(hjA hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj= ubah}(h]h ]h"]h$]h&]uh1jqhhhMhj9 ubj)}(hhh]h)}(hwAll CPUs are put in the same group. Workqueue makes no effort to process a work item on a CPU close to the issuing CPU.h]hwAll CPUs are put in the same group. 
Workqueue makes no effort to process a work item on a CPU close to the issuing CPU.}(hjW hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjT ubah}(h]h ]h"]h$]h&]uh1jhj9 ubeh}(h]h ]h"]h$]h&]uh1jkhhhMhj hhubeh}(h]h ]h"]h$]h&]uh1jfhjq hhhhhNubh)}(hThe default affinity scope can be changed with the module parameter ``workqueue.default_affinity_scope`` and a specific workqueue's affinity scope can be changed using ``apply_workqueue_attrs()``.h](hDThe default affinity scope can be changed with the module parameter }(hjw hhhNhNubj)}(h$``workqueue.default_affinity_scope``h]h workqueue.default_affinity_scope}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjw ubhB and a specific workqueue’s affinity scope can be changed using }(hjw hhhNhNubj)}(h``apply_workqueue_attrs()``h]happly_workqueue_attrs()}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjw ubh.}(hjw hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhjq hhubh)}(hIf ``WQ_SYSFS`` is set, the workqueue will have the following affinity scope related interface files under its ``/sys/devices/virtual/workqueue/WQ_NAME/`` directory.h](hIf }(hj hhhNhNubj)}(h ``WQ_SYSFS``h]hWQ_SYSFS}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubh` is set, the workqueue will have the following affinity scope related interface files under its }(hj hhhNhNubj)}(h+``/sys/devices/virtual/workqueue/WQ_NAME/``h]h'/sys/devices/virtual/workqueue/WQ_NAME/}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubh directory.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhjq hhubjg)}(hhh](jl)}(h``affinity_scope`` Read to see the current affinity scope. Write to change. When default is the current scope, reading this file will also show the current effective scope in parentheses, for example, ``default (cache)``. h](jr)}(h``affinity_scope``h]j)}(hj h]haffinity_scope}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1jqhhhMhj ubj)}(hhh](h)}(h8Read to see the current affinity scope. Write to change.h]h8Read to see the current affinity scope. 
Write to change.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj ubh)}(hWhen default is the current scope, reading this file will also show the current effective scope in parentheses, for example, ``default (cache)``.h](h}When default is the current scope, reading this file will also show the current effective scope in parentheses, for example, }(hj hhhNhNubj)}(h``default (cache)``h]hdefault (cache)}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubh.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhj ubeh}(h]h ]h"]h$]h&]uh1jhj ubeh}(h]h ]h"]h$]h&]uh1jkhhhMhj ubjl)}(hX``affinity_strict`` 0 by default indicating that affinity scopes are not strict. When a work item starts execution, workqueue makes a best-effort attempt to ensure that the worker is inside its affinity scope, which is called repatriation. Once started, the scheduler is free to move the worker anywhere in the system as it sees fit. This enables benefiting from scope locality while still being able to utilize other CPUs if necessary and available. If set to 1, all workers of the scope are guaranteed always to be in the scope. This may be useful when crossing affinity scopes has other implications, for example, in terms of power consumption or workload isolation. Strict NUMA scope can also be used to match the workqueue behavior of older kernels. h](jr)}(h``affinity_strict``h]j)}(hj< h]haffinity_strict}(hj> hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj: ubah}(h]h ]h"]h$]h&]uh1jqhhhMhj6 ubj)}(hhh](h)}(hX0 by default indicating that affinity scopes are not strict. When a work item starts execution, workqueue makes a best-effort attempt to ensure that the worker is inside its affinity scope, which is called repatriation. Once started, the scheduler is free to move the worker anywhere in the system as it sees fit. This enables benefiting from scope locality while still being able to utilize other CPUs if necessary and available.h]hX0 by default indicating that affinity scopes are not strict. 
When a work item starts execution, workqueue makes a best-effort attempt to ensure that the worker is inside its affinity scope, which is called repatriation. Once started, the scheduler is free to move the worker anywhere in the system as it sees fit. This enables benefiting from scope locality while still being able to utilize other CPUs if necessary and available.h]hX0 by default indicating that affinity scopes are not strict. When a work item starts execution, workqueue makes a best-effort attempt to ensure that the worker is inside its affinity scope, which is called repatriation. Once started, the scheduler is free to move the worker anywhere in the system as it sees fit. This enables benefiting from scope locality while still being able to utilize other CPUs if necessary and available.}(hjT hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjQ ubh)}(hX/If set to 1, all workers of the scope are guaranteed always to be in the scope. This may be useful when crossing affinity scopes has other implications, for example, in terms of power consumption or workload isolation. Strict NUMA scope can also be used to match the workqueue behavior of older kernels.h]hX/If set to 1, all workers of the scope are guaranteed always to be in the scope. This may be useful when crossing affinity scopes has other implications, for example, in terms of power consumption or workload isolation. Strict NUMA scope can also be used to match the workqueue behavior of older kernels.}(hjb hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjQ ubeh}(h]h ]h"]h$]h&]uh1jhj6 ubeh}(h]h ]h"]h$]h&]uh1jkhhhMhj hhubeh}(h]h ]h"]h$]h&]uh1jfhjq hhhhhNubeh}(h]affinity-scopesah ]h"]affinity scopesah$]h&]uh1hhhhhhhhMyubh)}(hhh](h)}(hAffinity Scopes and Performanceh]hAffinity Scopes and Performance}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj hhhhhMubh)}(hX%It'd be ideal if an unbound workqueue's behavior is optimal for the vast majority of use cases without further tuning. Unfortunately, in the current kernel, there exists a pronounced trade-off between locality and utilization necessitating explicit configurations when workqueues are heavily used.h]hX)It’d be ideal if an unbound workqueue’s behavior is optimal for the vast majority of use cases without further tuning. 
Unfortunately, in the current kernel, there exists a pronounced trade-off between locality and utilization necessitating explicit configurations when workqueues are heavily used.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj hhubh)}(hXcHigher locality leads to higher efficiency where more work is performed for the same number of consumed CPU cycles. However, higher locality may also cause lower overall system utilization if the work items are not spread enough across the affinity scopes by the issuers. The following performance testing with dm-crypt clearly illustrates this trade-off.h]hXcHigher locality leads to higher efficiency where more work is performed for the same number of consumed CPU cycles. However, higher locality may also cause lower overall system utilization if the work items are not spread enough across the affinity scopes by the issuers. The following performance testing with dm-crypt clearly illustrates this trade-off.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj hhubh)}(hXThe tests are run on a CPU with 12-cores/24-threads split across four L3 caches (AMD Ryzen 9 3900x). CPU clock boost is turned off for consistency. ``/dev/dm-0`` is a dm-crypt device created on NVME SSD (Samsung 990 PRO) and opened with ``cryptsetup`` with default settings.h](hThe tests are run on a CPU with 12-cores/24-threads split across four L3 caches (AMD Ryzen 9 3900x). CPU clock boost is turned off for consistency. 
}(hj hhhNhNubj)}(h ``/dev/dm-0``h]h /dev/dm-0}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubhL is a dm-crypt device created on NVME SSD (Samsung 990 PRO) and opened with }(hj hhhNhNubj)}(h``cryptsetup``h]h cryptsetup}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubh with default settings.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhj hhubh)}(hhh](h)}(h=Scenario 1: Enough issuers and work spread across the machineh]h=Scenario 1: Enough issuers and work spread across the machine}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj hhhhhMubh)}(hThe command used: ::h]hThe command used:}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj hhubj)}(h$ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k --ioengine=libaio \ --iodepth=64 --runtime=60 --numjobs=24 --time_based --group_reporting \ --name=iops-test-job --verify=sha512h]h$ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k --ioengine=libaio \ --iodepth=64 --runtime=60 --numjobs=24 --time_based --group_reporting \ --name=iops-test-job --verify=sha512}hj sbah}(h]h ]h"]h$]h&]jjuh1jhhhMhj hhubh)}(hXThere are 24 issuers, each issuing 64 IOs concurrently. ``--verify=sha512`` makes ``fio`` generate and read back the content each time which makes execution locality matter between the issuer and ``kcryptd``. The following are the read bandwidths and CPU utilizations depending on different affinity scope settings on ``kcryptd`` measured over five runs. Bandwidths are in MiBps, and CPU util in percents.h](h8There are 24 issuers, each issuing 64 IOs concurrently. }(hj hhhNhNubj)}(h``--verify=sha512``h]h--verify=sha512}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubh makes }(hj hhhNhNubj)}(h``fio``h]hfio}(hj0 hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubhk generate and read back the content each time which makes execution locality matter between the issuer and }(hj hhhNhNubj)}(h ``kcryptd``h]hkcryptd}(hjB hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubho. 
The following are the read bandwidths and CPU utilizations depending on different affinity scope settings on }(hj hhhNhNubj)}(h ``kcryptd``h]hkcryptd}(hjT hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubhL measured over five runs. Bandwidths are in MiBps, and CPU util in percents.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhj hhubhtable)}(hhh]htgroup)}(hhh](hcolspec)}(hhh]h}(h]h ]h"]h$]h&]colwidthKuh1jv hjs ubjw )}(hhh]h}(h]h ]h"]h$]h&]j Kuh1jv hjs ubjw )}(hhh]h}(h]h ]h"]h$]h&]j Kuh1jv hjs ubhthead)}(hhh]hrow)}(hhh](hentry)}(hhh]h)}(hAffinityh]hAffinity}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj ubah}(h]h ]h"]h$]h&]uh1j hj ubj )}(hhh]h)}(hBandwidth (MiBps)h]hBandwidth (MiBps)}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj ubah}(h]h ]h"]h$]h&]uh1j hj ubj )}(hhh]h)}(h CPU util (%)h]h CPU util (%)}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj ubah}(h]h ]h"]h$]h&]uh1j hj ubeh}(h]h ]h"]h$]h&]uh1j hj ubah}(h]h ]h"]h$]h&]uh1j hjs ubhtbody)}(hhh](j )}(hhh](j )}(hhh]h)}(hsystemh]hsystem}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj ubah}(h]h ]h"]h$]h&]uh1j hj ubj )}(hhh]h)}(h1159.40 ±1.34h]h1159.40 ±1.34}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjubah}(h]h ]h"]h$]h&]uh1j hj ubj )}(hhh]h)}(h 99.31 ±0.02h]h 99.31 ±0.02}(hj*hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj'ubah}(h]h ]h"]h$]h&]uh1j hj ubeh}(h]h ]h"]h$]h&]uh1j hj ubj )}(hhh](j )}(hhh]h)}(hcacheh]hcache}(hjJhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjGubah}(h]h ]h"]h$]h&]uh1j hjDubj )}(hhh]h)}(h1166.40 ±0.89h]h1166.40 ±0.89}(hjahhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj^ubah}(h]h ]h"]h$]h&]uh1j hjDubj )}(hhh]h)}(h 99.34 ±0.01h]h 99.34 ±0.01}(hjxhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjuubah}(h]h ]h"]h$]h&]uh1j hjDubeh}(h]h ]h"]h$]h&]uh1j hj ubj )}(hhh](j )}(hhh]h)}(hcache (strict)h]hcache (strict)}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjubah}(h]h ]h"]h$]h&]uh1j hjubj )}(hhh]h)}(h1166.00 ±0.71h]h1166.00 ±0.71}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjubah}(h]h ]h"]h$]h&]uh1j hjubj )}(hhh]h)}(h 99.35 ±0.01h]h 99.35 ±0.01}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjubah}(h]h ]h"]h$]h&]uh1j 
hjubeh}(h]h ]h"]h$]h&]uh1j hj ubeh}(h]h ]h"]h$]h&]uh1j hjs ubeh}(h]h ]h"]h$]h&]colsKuh1jq hjn ubah}(h]h ]colwidths-givenah"]h$]h&]uh1jl hj hhhNhNubh)}(hWith enough issuers spread across the system, there is no downside to "cache", strict or otherwise. All three configurations saturate the whole machine but the cache-affine ones outperform by 0.6% thanks to improved locality.h]hWith enough issuers spread across the system, there is no downside to “cache”, strict or otherwise. All three configurations saturate the whole machine but the cache-affine ones outperform by 0.6% thanks to improved locality.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj hhubeh}(h]hjubah}(h]h ]h"]h$]h&]uh1j hjdubeh}(h]h ]h"]h$]h&]uh1j hjaubah}(h]h ]h"]h$]h&]uh1j hjCubj )}(hhh](j )}(hhh](j )}(hhh]h)}(hsystemh]hsystem}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM@hjubah}(h]h ]h"]h$]h&]uh1j hjubj )}(hhh]h)}(h 993.60 ±1.82h]h 993.60 ±1.82}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMAhjubah}(h]h ]h"]h$]h&]uh1j hjubj )}(hhh]h)}(h 75.49 ±0.06h]h 75.49 ±0.06}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMBhjubah}(h]h ]h"]h$]h&]uh1j hjubeh}(h]h ]h"]h$]h&]uh1j hjubj )}(hhh](j )}(hhh]h)}(hcacheh]hcache}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMDhj ubah}(h]h ]h"]h$]h&]uh1j hj ubj )}(hhh]h)}(h 973.40 ±1.52h]h 973.40 ±1.52}(hj&hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMEhj#ubah}(h]h ]h"]h$]h&]uh1j hj ubj )}(hhh]h)}(h 74.90 ±0.07h]h 74.90 ±0.07}(hj=hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMFhj:ubah}(h]h ]h"]h$]h&]uh1j hj ubeh}(h]h ]h"]h$]h&]uh1j hjubj )}(hhh](j )}(hhh]h)}(hcache (strict)h]hcache (strict)}(hj]hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMHhjZubah}(h]h ]h"]h$]h&]uh1j hjWubj )}(hhh]h)}(h 828.20 ±4.49h]h 828.20 ±4.49}(hjthhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMIhjqubah}(h]h ]h"]h$]h&]uh1j hjWubj )}(hhh]h)}(h 66.84 ±0.29h]h 66.84 ±0.29}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMJhjubah}(h]h ]h"]h$]h&]uh1j hjWubeh}(h]h ]h"]h$]h&]uh1j hjubeh}(h]h ]h"]h$]h&]uh1j hjCubeh}(h]h ]h"]h$]h&]colsKuh1jq hj@ubah}(h]h ]jah"]h$]h&]uh1jl hjhhhNhNubh)}(hNow, the tradeoff between 
locality and utilization is clearer. "cache" shows 2% bandwidth loss compared to "system" and "cache (strict)" a whopping 20%.h]hNow, the tradeoff between locality and utilization is clearer. “cache” shows 2% bandwidth loss compared to “system” and “cache (strict)” a whopping 20%.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMLhjhhubeh}(h]9scenario-3-even-fewer-issuers-not-enough-work-to-saturateah ]h"];scenario 3: even fewer issuers, not enough work to saturateah$]h&]uh1hhj hhhhhM,ubh)}(hhh](h)}(hConclusion and Recommendationsh]hConclusion and Recommendations}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhMQubh)}(hXIn the above experiments, the efficiency advantage of the "cache" affinity scope over "system" is, while consistent and noticeable, small. However, the impact is dependent on the distances between the scopes and may be more pronounced in processors with more complex topologies.h]hXIn the above experiments, the efficiency advantage of the “cache” affinity scope over “system” is, while consistent and noticeable, small. However, the impact is dependent on the distances between the scopes and may be more pronounced in processors with more complex topologies.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMShjhhubh)}(hWhile the loss of work-conservation in certain scenarios hurts, it is a lot better than "cache (strict)" and maximizing workqueue utilization is unlikely to be the common case anyway. As such, "cache" is the default affinity scope for unbound pools.h]hXWhile the loss of work-conservation in certain scenarios hurts, it is a lot better than “cache (strict)” and maximizing workqueue utilization is unlikely to be the common case anyway. 
As such, “cache” is the default affinity scope for unbound pools.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMXhjhhubj )}(hhh](j%)}(hAs there is no one option which is great for most cases, workqueue usages that may consume a significant amount of CPU are recommended to configure the workqueues using ``apply_workqueue_attrs()`` and/or enable ``WQ_SYSFS``. h]h)}(hAs there is no one option which is great for most cases, workqueue usages that may consume a significant amount of CPU are recommended to configure the workqueues using ``apply_workqueue_attrs()`` and/or enable ``WQ_SYSFS``.h](hAs there is no one option which is great for most cases, workqueue usages that may consume a significant amount of CPU are recommended to configure the workqueues using }(hjhhhNhNubj)}(h``apply_workqueue_attrs()``h]happly_workqueue_attrs()}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh and/or enable }(hjhhhNhNubj)}(h ``WQ_SYSFS``h]hWQ_SYSFS}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhM]hjubah}(h]h ]h"]h$]h&]uh1j$hjhhhhhNubj%)}(hAn unbound workqueue with strict "cpu" affinity scope behaves the same as ``WQ_CPU_INTENSIVE`` per-cpu workqueue. There is no real advantage to the latter and an unbound workqueue provides a lot more flexibility. h]h)}(hAn unbound workqueue with strict "cpu" affinity scope behaves the same as ``WQ_CPU_INTENSIVE`` per-cpu workqueue. There is no real advantage to the latter and an unbound workqueue provides a lot more flexibility.h](hNAn unbound workqueue with strict “cpu” affinity scope behaves the same as }(hj>hhhNhNubj)}(h``WQ_CPU_INTENSIVE``h]hWQ_CPU_INTENSIVE}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj>ubhu per-cpu workqueue. There is no real advantage to the latter and an unbound workqueue provides a lot more flexibility.}(hj>hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMbhj:ubah}(h]h ]h"]h$]h&]uh1j$hjhhhhhNubj%)}(hrAffinity scopes are introduced in Linux v6.5. To emulate the previous behavior, use strict "numa" affinity scope. 
h]h)}(hqAffinity scopes are introduced in Linux v6.5. To emulate the previous behavior, use strict "numa" affinity scope.h]huAffinity scopes are introduced in Linux v6.5. To emulate the previous behavior, use strict “numa” affinity scope.}(hjhhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMfhjdubah}(h]h ]h"]h$]h&]uh1j$hjhhhhhNubj%)}(hXRThe loss of work-conservation in non-strict affinity scopes is likely originating from the scheduler. There is no theoretical reason why the kernel wouldn't be able to do the right thing and maintain work-conservation in most cases. As such, it is possible that future scheduler improvements may make most of these tunables unnecessary. h]h)}(hXPThe loss of work-conservation in non-strict affinity scopes is likely originating from the scheduler. There is no theoretical reason why the kernel wouldn't be able to do the right thing and maintain work-conservation in most cases. As such, it is possible that future scheduler improvements may make most of these tunables unnecessary.h]hXRThe loss of work-conservation in non-strict affinity scopes is likely originating from the scheduler. There is no theoretical reason why the kernel wouldn’t be able to do the right thing and maintain work-conservation in most cases. 
As such, it is possible that future scheduler improvements may make most of these tunables unnecessary.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMihj|ubah}(h]h ]h"]h$]h&]uh1j$hjhhhhhNubeh}(h]h ]h"]h$]h&]jtjuuh1jhhhM]hjhhubeh}(h]conclusion-and-recommendationsah ]h"]conclusion and recommendationsah$]h&]uh1hhj hhhhhMQubeh}(h]affinity-scopes-and-performanceah ]h"]affinity scopes and performanceah$]h&]uh1hhhhhhhhMubh)}(hhh](h)}(hExamining Configurationh]hExamining Configuration}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhMqubh)}(hUse tools/workqueue/wq_dump.py to examine unbound CPU affinity configuration, worker pools and how workqueues map to the pools: ::h]hUse tools/workqueue/wq_dump.py to examine unbound CPU affinity configuration, worker pools and how workqueues map to the pools:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMshjhhubj)}(hXb$ tools/workqueue/wq_dump.py Affinity Scopes =============== wq_unbound_cpumask=0000000f CPU nr_pods 4 pod_cpus [0]=00000001 [1]=00000002 [2]=00000004 [3]=00000008 pod_node [0]=0 [1]=0 [2]=1 [3]=1 cpu_pod [0]=0 [1]=1 [2]=2 [3]=3 SMT nr_pods 4 pod_cpus [0]=00000001 [1]=00000002 [2]=00000004 [3]=00000008 pod_node [0]=0 [1]=0 [2]=1 [3]=1 cpu_pod [0]=0 [1]=1 [2]=2 [3]=3 CACHE (default) nr_pods 2 pod_cpus [0]=00000003 [1]=0000000c pod_node [0]=0 [1]=1 cpu_pod [0]=0 [1]=0 [2]=1 [3]=1 NUMA nr_pods 2 pod_cpus [0]=00000003 [1]=0000000c pod_node [0]=0 [1]=1 cpu_pod [0]=0 [1]=0 [2]=1 [3]=1 SYSTEM nr_pods 1 pod_cpus [0]=0000000f pod_node [0]=-1 cpu_pod [0]=0 [1]=0 [2]=0 [3]=0 Worker Pools ============ pool[00] ref= 1 nice= 0 idle/workers= 4/ 4 cpu= 0 pool[01] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 0 pool[02] ref= 1 nice= 0 idle/workers= 4/ 4 cpu= 1 pool[03] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 1 pool[04] ref= 1 nice= 0 idle/workers= 4/ 4 cpu= 2 pool[05] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 2 pool[06] ref= 1 nice= 0 idle/workers= 3/ 3 cpu= 3 pool[07] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 3 pool[08] ref=42 nice= 0 idle/workers= 6/ 6 cpus=0000000f 
pool[09] ref=28 nice= 0 idle/workers= 3/ 3 cpus=00000003 pool[10] ref=28 nice= 0 idle/workers= 17/ 17 cpus=0000000c pool[11] ref= 1 nice=-20 idle/workers= 1/ 1 cpus=0000000f pool[12] ref= 2 nice=-20 idle/workers= 1/ 1 cpus=00000003 pool[13] ref= 2 nice=-20 idle/workers= 1/ 1 cpus=0000000c Workqueue CPU -> pool ===================== [ workqueue \ CPU 0 1 2 3 dfl] events percpu 0 2 4 6 events_highpri percpu 1 3 5 7 events_long percpu 0 2 4 6 events_unbound unbound 9 9 10 10 8 events_freezable percpu 0 2 4 6 events_power_efficient percpu 0 2 4 6 events_freezable_pwr_ef percpu 0 2 4 6 rcu_gp percpu 0 2 4 6 rcu_par_gp percpu 0 2 4 6 slub_flushwq percpu 0 2 4 6 netns ordered 8 8 8 8 8 ...h]hXb$ tools/workqueue/wq_dump.py Affinity Scopes =============== wq_unbound_cpumask=0000000f CPU nr_pods 4 pod_cpus [0]=00000001 [1]=00000002 [2]=00000004 [3]=00000008 pod_node [0]=0 [1]=0 [2]=1 [3]=1 cpu_pod [0]=0 [1]=1 [2]=2 [3]=3 SMT nr_pods 4 pod_cpus [0]=00000001 [1]=00000002 [2]=00000004 [3]=00000008 pod_node [0]=0 [1]=0 [2]=1 [3]=1 cpu_pod [0]=0 [1]=1 [2]=2 [3]=3 CACHE (default) nr_pods 2 pod_cpus [0]=00000003 [1]=0000000c pod_node [0]=0 [1]=1 cpu_pod [0]=0 [1]=0 [2]=1 [3]=1 NUMA nr_pods 2 pod_cpus [0]=00000003 [1]=0000000c pod_node [0]=0 [1]=1 cpu_pod [0]=0 [1]=0 [2]=1 [3]=1 SYSTEM nr_pods 1 pod_cpus [0]=0000000f pod_node [0]=-1 cpu_pod [0]=0 [1]=0 [2]=0 [3]=0 Worker Pools ============ pool[00] ref= 1 nice= 0 idle/workers= 4/ 4 cpu= 0 pool[01] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 0 pool[02] ref= 1 nice= 0 idle/workers= 4/ 4 cpu= 1 pool[03] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 1 pool[04] ref= 1 nice= 0 idle/workers= 4/ 4 cpu= 2 pool[05] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 2 pool[06] ref= 1 nice= 0 idle/workers= 3/ 3 cpu= 3 pool[07] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 3 pool[08] ref=42 nice= 0 idle/workers= 6/ 6 cpus=0000000f pool[09] ref=28 nice= 0 idle/workers= 3/ 3 cpus=00000003 pool[10] ref=28 nice= 0 idle/workers= 17/ 17 cpus=0000000c pool[11] ref= 1 nice=-20 
idle/workers= 1/ 1 cpus=0000000f pool[12] ref= 2 nice=-20 idle/workers= 1/ 1 cpus=00000003 pool[13] ref= 2 nice=-20 idle/workers= 1/ 1 cpus=0000000c Workqueue CPU -> pool ===================== [ workqueue \ CPU 0 1 2 3 dfl] events percpu 0 2 4 6 events_highpri percpu 1 3 5 7 events_long percpu 0 2 4 6 events_unbound unbound 9 9 10 10 8 events_freezable percpu 0 2 4 6 events_power_efficient percpu 0 2 4 6 events_freezable_pwr_ef percpu 0 2 4 6 rcu_gp percpu 0 2 4 6 rcu_par_gp percpu 0 2 4 6 slub_flushwq percpu 0 2 4 6 netns ordered 8 8 8 8 8 ...}hjsbah}(h]h ]h"]h$]h&]jjuh1jhhhMvhjhhubh)}(h-See the command's help message for more info.h]h/See the command’s help message for more info.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubeh}(h]examining-configurationah ]h"]examining configurationah$]h&]uh1hhhhhhhhMqubh)}(hhh](h)}(h Monitoringh]h Monitoring}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhMubh)}(hEUse tools/workqueue/wq_monitor.py to monitor workqueue operations: ::h]hBUse tools/workqueue/wq_monitor.py to monitor workqueue operations:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubj)}(hX$ tools/workqueue/wq_monitor.py events total infl CPUtime CPUhog CMW/RPR mayday rescued events 18545 0 6.1 0 5 - - events_highpri 8 0 0.0 0 0 - - events_long 3 0 0.0 0 0 - - events_unbound 38306 0 0.1 - 7 - - events_freezable 0 0 0.0 0 0 - - events_power_efficient 29598 0 0.2 0 0 - - events_freezable_pwr_ef 10 0 0.0 0 0 - - sock_diag_events 0 0 0.0 0 0 - - total infl CPUtime CPUhog CMW/RPR mayday rescued events 18548 0 6.1 0 5 - - events_highpri 8 0 0.0 0 0 - - events_long 3 0 0.0 0 0 - - events_unbound 38322 0 0.1 - 7 - - events_freezable 0 0 0.0 0 0 - - events_power_efficient 29603 0 0.2 0 0 - - events_freezable_pwr_ef 10 0 0.0 0 0 - - sock_diag_events 0 0 0.0 0 0 - - ...h]hX$ tools/workqueue/wq_monitor.py events total infl CPUtime CPUhog CMW/RPR mayday rescued events 18545 0 6.1 0 5 - - events_highpri 8 0 0.0 0 0 - - events_long 3 0 0.0 0 0 - - events_unbound 38306 0 0.1 - 7 - - 
events_freezable 0 0 0.0 0 0 - - events_power_efficient 29598 0 0.2 0 0 - - events_freezable_pwr_ef 10 0 0.0 0 0 - - sock_diag_events 0 0 0.0 0 0 - - total infl CPUtime CPUhog CMW/RPR mayday rescued events 18548 0 6.1 0 5 - - events_highpri 8 0 0.0 0 0 - - events_long 3 0 0.0 0 0 - - events_unbound 38322 0 0.1 - 7 - - events_freezable 0 0 0.0 0 0 - - events_power_efficient 29603 0 0.2 0 0 - - events_freezable_pwr_ef 10 0 0.0 0 0 - - sock_diag_events 0 0 0.0 0 0 - - ...}hj sbah}(h]h ]h"]h$]h&]jjuh1jhhhMhjhhubh)}(h-See the command's help message for more info.h]h/See the command’s help message for more info.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubeh}(h] monitoringah ]h"] monitoringah$]h&]uh1hhhhhhhhMubh)}(hhh](h)}(h Debuggingh]h Debugging}(hj3hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj0hhhhhMubh)}(hBecause the work functions are executed by generic worker threads there are a few tricks needed to shed some light on misbehaving workqueue users.h]hBecause the work functions are executed by generic worker threads there are a few tricks needed to shed some light on misbehaving workqueue users.}(hjAhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj0hhubh)}(h1Worker threads show up in the process list as: ::h]h.Worker threads show up in the process list as:}(hjOhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj0hhubj)}(hX;root 5671 0.0 0.0 0 0 ? S 12:07 0:00 [kworker/0:1] root 5672 0.0 0.0 0 0 ? S 12:07 0:00 [kworker/1:2] root 5673 0.0 0.0 0 0 ? S 12:12 0:00 [kworker/0:0] root 5674 0.0 0.0 0 0 ? S 12:13 0:00 [kworker/1:0]h]hX;root 5671 0.0 0.0 0 0 ? S 12:07 0:00 [kworker/0:1] root 5672 0.0 0.0 0 0 ? S 12:07 0:00 [kworker/1:2] root 5673 0.0 0.0 0 0 ? S 12:12 0:00 [kworker/0:0] root 5674 0.0 0.0 0 0 ? S 12:13 0:00 [kworker/1:0]}hj]sbah}(h]h ]h"]h$]h&]jjuh1jhhhMhj0hhubh)}(h[If kworkers are going crazy (using too much cpu), there are two types of possible problems:h]h[If kworkers are going crazy (using too much cpu), there are two types of possible problems:}(hjkhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj0hhubj)}(hh1. 
Something being scheduled in rapid succession 2. A single work item that consumes lots of cpu cycles h]henumerated_list)}(hhh](j%)}(h-Something being scheduled in rapid successionh]h)}(hjh]h-Something being scheduled in rapid succession}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjubah}(h]h ]h"]h$]h&]uh1j$hjubj%)}(h4A single work item that consumes lots of cpu cycles h]h)}(h3A single work item that consumes lots of cpu cyclesh]h3A single work item that consumes lots of cpu cycles}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjubah}(h]h ]h"]h$]h&]uh1j$hjubeh}(h]h ]h"]h$]h&]enumtypearabicprefixhsuffix.uh1j}hjyubah}(h]h ]h"]h$]h&]uh1jhhhMhj0hhubh)}(h.The first one can be tracked using tracing: ::h]h+The first one can be tracked using tracing:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj0hhubj)}(h$ echo workqueue:workqueue_queue_work > /sys/kernel/tracing/set_event $ cat /sys/kernel/tracing/trace_pipe > out.txt (wait a few secs) ^Ch]h$ echo workqueue:workqueue_queue_work > /sys/kernel/tracing/set_event $ cat /sys/kernel/tracing/trace_pipe > out.txt (wait a few secs) ^C}hjsbah}(h]h ]h"]h$]h&]jjuh1jhhhMhj0hhubh)}(hIf something is busy looping on work queueing, it would be dominating the output and the offender can be determined with the work item function.h]hIf something is busy looping on work queueing, it would be dominating the output and the offender can be determined with the work item function.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj0hhubh)}(hvFor the second type of problems it should be possible to just check the stack trace of the offending worker thread. 
::h]hsFor the second type of problems it should be possible to just check the stack trace of the offending worker thread.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj0hhubj)}(h'$ cat /proc/THE_OFFENDING_KWORKER/stackh]h'$ cat /proc/THE_OFFENDING_KWORKER/stack}hjsbah}(h]h ]h"]h$]h&]jjuh1jhhhMhj0hhubh)}(hHThe work item's function should be trivially visible in the stack trace.h]hJThe work item’s function should be trivially visible in the stack trace.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj0hhubeh}(h] debuggingah ]h"] debuggingah$]h&]uh1hhhhhhhhMubh)}(hhh](h)}(hNon-reentrance Conditionsh]hNon-reentrance Conditions}(hj!hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhMubh)}(hzWorkqueue guarantees that a work item cannot be re-entrant if the following conditions hold after a work item gets queued:h]hzWorkqueue guarantees that a work item cannot be re-entrant if the following conditions hold after a work item gets queued:}(hj/hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubj)}(h1. The work function hasn't been changed. 2. No one queues the work item to another workqueue. 3. The work item hasn't been reinitiated. h]j~)}(hhh](j%)}(h&The work function hasn't been changed.h]h)}(hjFh]h(The work function hasn’t been changed.}(hjHhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM hjDubah}(h]h ]h"]h$]h&]uh1j$hjAubj%)}(h1No one queues the work item to another workqueue.h]h)}(hj]h]h1No one queues the work item to another workqueue.}(hj_hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM hj[ubah}(h]h ]h"]h$]h&]uh1j$hjAubj%)}(h'The work item hasn't been reinitiated. 
h]h)}(h&The work item hasn't been reinitiated.h]h(The work item hasn’t been reinitiated.}(hjvhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM hjrubah}(h]h ]h"]h$]h&]uh1j$hjAubeh}(h]h ]h"]h$]h&]jjjhjjuh1j}hj=ubah}(h]h ]h"]h$]h&]uh1jhhhM hjhhubh)}(hIn other words, if the above conditions hold, the work item is guaranteed to be executed by at most one worker system-wide at any given time.h]hIn other words, if the above conditions hold, the work item is guaranteed to be executed by at most one worker system-wide at any given time.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubh)}(hNote that requeuing the work item (to the same queue) in the self function doesn't break these conditions, so it's safe to do. Otherwise, caution is required when breaking the conditions inside a work function.h]hNote that requeuing the work item (to the same queue) in the self function doesn’t break these conditions, so it’s safe to do. Otherwise, caution is required when breaking the conditions inside a work function.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubeh}(h]non-reentrance-conditionsah ]h"]non-reentrance conditionsah$]h&]uh1hhhhhhhhMubh)}(hhh](h)}(h&Kernel Inline Documentations Referenceh]h&Kernel Inline Documentations Reference}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhMubhindex)}(hhh]h}(h]h ]h"]h$]h&]entries](singleworkqueue_attrs (C struct)c.workqueue_attrshNtauh1jhjhhhNhNubhdesc)}(hhh](hdesc_signature)}(hworkqueue_attrsh]hdesc_signature_line)}(hstruct workqueue_attrsh](hdesc_sig_keyword)}(hstructh]hstruct}(hjhhhNhNubah}(h]h ]kah"]h$]h&]uh1jhjhhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhKubhdesc_sig_space)}(h h]h }(hjhhhNhNubah}(h]h ]wah"]h$]h&]uh1jhjhhhjhKubh desc_name)}(hworkqueue_attrsh]h desc_sig_name)}(hjh]hworkqueue_attrs}(hjhhhNhNubah}(h]h ]nah"]h$]h&]uh1jhjubah}(h]h ](sig-namedescnameeh"]h$]h&]jjuh1jhjhhhjhKubeh}(h]h ]h"]h$]h&]jj add_permalinkuh1jsphinx_line_type declaratorhjhhhjhKubah}(h]jah ](sig sig-objecteh"]h$]h&] is_multiline 
_toc_parts) _toc_namehuh1jhjhKhjhhubh desc_content)}(hhh]h)}(h"A struct for workqueue attributes.h]h"A struct for workqueue attributes.}(hjGhhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhKhjDhhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjhKubeh}(h]h ](cstructeh"]h$]h&]domainj_objtypej`desctypej`noindex noindexentrynocontentsentryuh1jhhhjhNhNubh container)}(hX**Definition**:: struct workqueue_attrs { int nice; cpumask_var_t cpumask; cpumask_var_t __pod_cpumask; bool affn_strict; enum wq_affn_scope affn_scope; bool ordered; }; **Members** ``nice`` nice level ``cpumask`` allowed CPUs Work items in this workqueue are affine to these CPUs and not allowed to execute on other CPUs. A pool serving a workqueue must have the same **cpumask**. ``__pod_cpumask`` internal attribute used to create per-pod pools Internal use only. Per-pod unbound worker pools are used to improve locality. Always a subset of ->cpumask. A workqueue can be associated with multiple worker pools with disjoint **__pod_cpumask**'s. Whether the enforcement of a pool's **__pod_cpumask** is strict depends on **affn_strict**. ``affn_strict`` affinity scope is strict If clear, workqueue will make a best-effort attempt at starting the worker inside **__pod_cpumask** but the scheduler is free to migrate it outside. If set, workers are only allowed to run inside **__pod_cpumask**. ``affn_scope`` unbound CPU affinity scope CPU pods are used to improve execution locality of unbound work items. There are multiple pod types, one for each wq_affn_scope, and every CPU in the system belongs to one pod in every pod type. CPUs that belong to the same pod share the worker pool. For example, selecting ``WQ_AFFN_NUMA`` makes the workqueue use a separate worker pool for each NUMA node. 
``ordered`` work items must be executed one by one in queueing orderh](h)}(h**Definition**::h](j)}(h**Definition**h]h Definition}(hjthhhNhNubah}(h]h ]h"]h$]h&]uh1jhjpubh:}(hjphhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhKhjlubj)}(hstruct workqueue_attrs { int nice; cpumask_var_t cpumask; cpumask_var_t __pod_cpumask; bool affn_strict; enum wq_affn_scope affn_scope; bool ordered; };h]hstruct workqueue_attrs { int nice; cpumask_var_t cpumask; cpumask_var_t __pod_cpumask; bool affn_strict; enum wq_affn_scope affn_scope; bool ordered; };}hjsbah}(h]h ]h"]h$]h&]jjuh1jh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhKhjlubh)}(h **Members**h]j)}(hjh]hMembers}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhKhjlubjg)}(hhh](jl)}(h``nice`` nice level h](jr)}(h``nice``h]j)}(hjh]hnice}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhKhjubj)}(hhh]h)}(h nice levelh]h nice level}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhKhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhKhjubjl)}(h``cpumask`` allowed CPUs Work items in this workqueue are affine to these CPUs and not allowed to execute on other CPUs. A pool serving a workqueue must have the same **cpumask**. h](jr)}(h ``cpumask``h]j)}(hjh]hcpumask}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhKhjubj)}(hhh](h)}(h allowed CPUsh]h allowed CPUs}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhKhj ubh)}(hWork items in this workqueue are affine to these CPUs and not allowed to execute on other CPUs. 
A pool serving a workqueue must have the same **cpumask**.h](hWork items in this workqueue are affine to these CPUs and not allowed to execute on other CPUs. A pool serving a workqueue must have the same }(hjhhhNhNubj)}(h **cpumask**h]hcpumask}(hj&hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhKhj ubeh}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhj hKhjubjl)}(hXh``__pod_cpumask`` internal attribute used to create per-pod pools Internal use only. Per-pod unbound worker pools are used to improve locality. Always a subset of ->cpumask. A workqueue can be associated with multiple worker pools with disjoint **__pod_cpumask**'s. Whether the enforcement of a pool's **__pod_cpumask** is strict depends on **affn_strict**. h](jr)}(h``__pod_cpumask``h]j)}(hjQh]h __pod_cpumask}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1jhjOubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhKhjKubj)}(hhh](h)}(h/internal attribute used to create per-pod poolsh]h/internal attribute used to create per-pod pools}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhKhjgubh)}(hInternal use only.h]hInternal use only.}(hjyhhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhKhjgubh)}(hXPer-pod unbound worker pools are used to improve locality. Always a subset of ->cpumask. A workqueue can be associated with multiple worker pools with disjoint **__pod_cpumask**'s. Whether the enforcement of a pool's **__pod_cpumask** is strict depends on **affn_strict**.h](hPer-pod unbound worker pools are used to improve locality. Always a subset of ->cpumask. 
A workqueue can be associated with multiple worker pools with disjoint }(hjhhhNhNubj)}(h**__pod_cpumask**h]h __pod_cpumask}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh,’s. Whether the enforcement of a pool’s }(hjhhhNhNubj)}(h**__pod_cpumask**h]h __pod_cpumask}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh is strict depends on }(hjhhhNhNubj)}(h**affn_strict**h]h affn_strict}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhKhjgubeh}(h]h ]h"]h$]h&]uh1jhjKubeh}(h]h ]h"]h$]h&]uh1jkhjfhKhjubjl)}(hX``affn_strict`` affinity scope is strict If clear, workqueue will make a best-effort attempt at starting the worker inside **__pod_cpumask** but the scheduler is free to migrate it outside. If set, workers are only allowed to run inside **__pod_cpumask**. h](jr)}(h``affn_strict``h]j)}(hjh]h affn_strict}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhKhjubj)}(hhh](h)}(haffinity scope is stricth]haffinity scope is strict}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhKhjubh)}(hIf clear, workqueue will make a best-effort attempt at starting the worker inside **__pod_cpumask** but the scheduler is free to migrate it outside.h](hRIf clear, workqueue will make a best-effort attempt at starting the worker inside }(hjhhhNhNubj)}(h**__pod_cpumask**h]h __pod_cpumask}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh1 but the scheduler is free to migrate it outside.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhKhjubh)}(hAIf set, workers are only allowed to run inside **__pod_cpumask**.h](h/If set, workers are only allowed to run inside }(hj(hhhNhNubj)}(h**__pod_cpumask**h]h __pod_cpumask}(hj0hhhNhNubah}(h]h 
]h"]h$]h&]uh1jhj(ubh.}(hj(hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhjhKhjubeh}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhKhjubjl)}(hX``affn_scope`` unbound CPU affinity scope CPU pods are used to improve execution locality of unbound work items. There are multiple pod types, one for each wq_affn_scope, and every CPU in the system belongs to one pod in every pod type. CPUs that belong to the same pod share the worker pool. For example, selecting ``WQ_AFFN_NUMA`` makes the workqueue use a separate worker pool for each NUMA node. h](jr)}(h``affn_scope``h]j)}(hjZh]h affn_scope}(hj\hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjXubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhKhjTubj)}(hhh](h)}(hunbound CPU affinity scopeh]hunbound CPU affinity scope}(hjshhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhKhjpubh)}(hXeCPU pods are used to improve execution locality of unbound work items. There are multiple pod types, one for each wq_affn_scope, and every CPU in the system belongs to one pod in every pod type. CPUs that belong to the same pod share the worker pool. For example, selecting ``WQ_AFFN_NUMA`` makes the workqueue use a separate worker pool for each NUMA node.h](hXCPU pods are used to improve execution locality of unbound work items. There are multiple pod types, one for each wq_affn_scope, and every CPU in the system belongs to one pod in every pod type. CPUs that belong to the same pod share the worker pool. 
For example, selecting }(hjhhhNhNubj)}(h``WQ_AFFN_NUMA``h]h WQ_AFFN_NUMA}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhC makes the workqueue use a separate worker pool for each NUMA node.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhKhjpubeh}(h]h ]h"]h$]h&]uh1jhjTubeh}(h]h ]h"]h$]h&]uh1jkhjohKhjubjl)}(hD``ordered`` work items must be executed one by one in queueing orderh](jr)}(h ``ordered``h]j)}(hjh]hordered}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhKhjubj)}(hhh]h)}(h8work items must be executed one by one in queueing orderh]h8work items must be executed one by one in queueing order}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhKhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhKhjubeh}(h]h ]h"]h$]h&]uh1jfhjlubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhKhjhhubh)}(h>This can be used to change attributes of an unbound workqueue.h]h>This can be used to change attributes of an unbound workqueue.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhKhjhhubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jwork_pending (C macro)c.work_pendinghNtauh1jhjhhhNhNubj)}(hhh](j)}(h work_pendingh]j)}(h work_pendingh]j)}(h work_pendingh]j)}(hj0h]h work_pending}(hj:hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj6ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj2hhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMaubah}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj.hhhjMhMaubah}(h]j)ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjMhMahj+hhubjC)}(hhh]h}(h]h 
]h"]h$]h&]uh1jBhj+hhhjMhMaubeh}(h]h ](j_macroeh"]h$]h&]jdj_jejfjfjfjgjhjiuh1jhhhjhNhNubh)}(h``work_pending (work)``h]j)}(hjlh]hwork_pending (work)}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjjubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMchjhhubj)}(h2Find out whether a work item is currently pending h]h)}(h1Find out whether a work item is currently pendingh]h1Find out whether a work item is currently pending}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMahjubah}(h]h ]h"]h$]h&]uh1jhjhMahjhhubjk)}(h4**Parameters** ``work`` The work item in questionh](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMehjubjg)}(hhh]jl)}(h"``work`` The work item in questionh](jr)}(h``work``h]j)}(hjh]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMghjubj)}(hhh]h)}(hThe work item in questionh]hThe work item in question}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMbhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhMghjubah}(h]h ]h"]h$]h&]uh1jfhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jdelayed_work_pending (C macro)c.delayed_work_pendinghNtauh1jhjhhhNhNubj)}(hhh](j)}(hdelayed_work_pendingh]j)}(hdelayed_work_pendingh]j)}(hdelayed_work_pendingh]j)}(hjh]hdelayed_work_pending}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhubah}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhj1hMhubah}(h]j ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj1hMhhjhhubjC)}(hhh]h}(h]h 
]h"]h$]h&]uh1jBhjhhhj1hMhubeh}(h]h ](j_macroeh"]h$]h&]jdj_jejJjfjJjgjhjiuh1jhhhjhNhNubh)}(h``delayed_work_pending (w)``h]j)}(hjPh]hdelayed_work_pending (w)}(hjRhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMjhjhhubj)}(h limits the number of in-flight work items for each CPU. e.g. }(hjhhhNhNubj)}(h**max_active**h]h max_active}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhW of 1 indicates that each CPU can be executing at most one work item for the workqueue.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhjubh)}(hFor unbound workqueues, **max_active** limits the number of in-flight work items for the whole system. e.g. **max_active** of 16 indicates that there can be at most 16 work items executing for the workqueue in the whole system.h](hFor unbound workqueues, }(hjhhhNhNubj)}(h**max_active**h]h max_active}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhF limits the number of in-flight work items for the whole system. e.g. 
}(hjhhhNhNubj)}(h**max_active**h]h max_active}(hj0hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhi of 16 indicates that there can be at most 16 work items executing for the workqueue in the whole system.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhjubh)}(hAs sharing the same active counter for an unbound workqueue across multiple NUMA nodes can be expensive, **max_active** is distributed to each NUMA node according to the proportion of the number of online CPUs and enforced independently.h](hiAs sharing the same active counter for an unbound workqueue across multiple NUMA nodes can be expensive, }(hjIhhhNhNubj)}(h**max_active**h]h max_active}(hjQhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjIubhv is distributed to each NUMA node according to the proportion of the number of online CPUs and enforced independently.}(hjIhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhjubh)}(hXDepending on online CPU distribution, a node may end up with per-node max_active which is significantly lower than **max_active**, which can lead to deadlocks if the per-node concurrency limit is lower than the maximum number of interdependent work items for the workqueue.h](hsDepending on online CPU distribution, a node may end up with per-node max_active which is significantly lower than }(hjjhhhNhNubj)}(h**max_active**h]h max_active}(hjrhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjjubh, which can lead to deadlocks if the per-node concurrency limit is lower than the maximum number of interdependent work items for the workqueue.}(hjjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhjubh)}(hX0To guarantee forward progress regardless of online CPU distribution, the concurrency limit on every node is guaranteed to be equal to or greater than min_active which is set to min(**max_active**, 
``WQ_DFL_MIN_ACTIVE``). This means that the sum of per-node max_active's may be larger than **max_active**.h](hTo guarantee forward progress regardless of online CPU distribution, the concurrency limit on every node is guaranteed to be equal to or greater than min_active which is set to min(}(hjhhhNhNubj)}(h**max_active**h]h max_active}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh, }(hjhhhNhNubj)}(h``WQ_DFL_MIN_ACTIVE``h]hWQ_DFL_MIN_ACTIVE}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhI). This means that the sum of per-node max_active’s may be larger than }(hjhhhNhNubj)}(h**max_active**h]h max_active}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhjubh)}(hbFor detailed information on ``WQ_``\* flags, please refer to Documentation/core-api/workqueue.rst.h](hFor detailed information on }(hjhhhNhNubj)}(h``WQ_``h]hWQ_}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh?* flags, please refer to Documentation/core-api/workqueue.rst.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhjubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhjubh)}(hCPointer to the allocated workqueue on success, ``NULL`` on failure.h](h/Pointer to the allocated workqueue on success, }(hj hhhNhNubj)}(h``NULL``h]hNULL}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubh on failure.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j!devm_alloc_workqueue (C function)c.devm_alloc_workqueuehNtauh1jhjhhhNhNubj)}(hhh](j)}(h}struct workqueue_struct * devm_alloc_workqueue (struct device *dev, const char *fmt, unsigned int flags, int 
max_active, ...)h]j)}(h{struct workqueue_struct *devm_alloc_workqueue(struct device *dev, const char *fmt, unsigned int flags, int max_active, ...)h](j)}(hjh]hstruct}(hjJhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjFhhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhM ubj)}(h h]h }(hjXhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjFhhhjWhM ubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjihhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjfubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjkmodnameN classnameNj7j:)}j=]j@)}j3devm_alloc_workqueuesbc.devm_alloc_workqueueasbuh1hhjFhhhjWhM ubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjFhhhjWhM ubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjFhhhjWhM ubj)}(hdevm_alloc_workqueueh]j)}(hjh]hdevm_alloc_workqueue}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjFhhhjWhM ubj|)}(hN(struct device *dev, const char *fmt, unsigned int flags, int max_active, ...)h](j)}(hstruct device *devh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(hdeviceh]hdevice}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]jc.devm_alloc_workqueueasbuh1hhjubj)}(h h]h }(hj hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hj hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hdevh]hdev}(hj hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hconst char *fmth](j)}(hjh]hconst}(hj4 hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj0 ubj)}(h h]h }(hjA hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj0 ubj)}(hcharh]hchar}(hjO hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj0 ubj)}(h h]h }(hj] hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj0 ubjU)}(hjuh]h*}(hjk hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj0 ubj)}(hfmth]hfmt}(hjx hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj0 ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hunsigned int flagsh](j)}(hunsignedh]hunsigned}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj ubj)}(h h]h }(hj hhhNhNubah}(h]h ]j 
ah"]h$]h&]uh1jhj ubj)}(hinth]hint}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj ubj)}(h h]h }(hj hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj ubj)}(hflagsh]hflags}(hj hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hint max_activeh](j)}(hinth]hint}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj ubj)}(h h]h }(hj hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj ubj)}(h max_activeh]h max_active}(hj hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(h...h]jU)}(hjph]h...}(hj!hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj!ubah}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1j{hjFhhhjWhM ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjBhhhjWhM ubah}(h]j=ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjWhM hj?hhubjC)}(hhh]h)}(h%Resource-managed allocate a workqueueh]h%Resource-managed allocate a workqueue}(hj@!hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhM hj=!hhubah}(h]h ]h"]h$]h&]uh1jBhj?hhhjWhM ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejX!jfjX!jgjhjiuh1jhhhjhNhNubjk)}(hX=**Parameters** ``struct device *dev`` Device to allocate workqueue for ``const char *fmt`` printf format for the name of the workqueue ``unsigned int flags`` WQ_* flags ``int max_active`` max in-flight work items, 0 for default ``...`` args for **fmt** **Description** Resource managed workqueue, see alloc_workqueue() for details. The workqueue will be automatically destroyed on driver detach. Typically this should be used in drivers already relying on devm interfaces. 
**Return** Pointer to the allocated workqueue on success, ``NULL`` on failure.h](h)}(h**Parameters**h]j)}(hjb!h]h Parameters}(hjd!hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj`!ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhM hj\!ubjg)}(hhh](jl)}(h8``struct device *dev`` Device to allocate workqueue for h](jr)}(h``struct device *dev``h]j)}(hj!h]hstruct device *dev}(hj!hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj!ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhM hj{!ubj)}(hhh]h)}(h Device to allocate workqueue forh]h Device to allocate workqueue for}(hj!hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj!hM hj!ubah}(h]h ]h"]h$]h&]uh1jhj{!ubeh}(h]h ]h"]h$]h&]uh1jkhj!hM hjx!ubjl)}(h@``const char *fmt`` printf format for the name of the workqueue h](jr)}(h``const char *fmt``h]j)}(hj!h]hconst char *fmt}(hj!hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj!ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhM hj!ubj)}(hhh]h)}(h+printf format for the name of the workqueueh]h+printf format for the name of the workqueue}(hj!hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj!hM hj!ubah}(h]h ]h"]h$]h&]uh1jhj!ubeh}(h]h ]h"]h$]h&]uh1jkhj!hM hjx!ubjl)}(h"``unsigned int flags`` WQ_* flags h](jr)}(h``unsigned int flags``h]j)}(hj!h]hunsigned int flags}(hj!hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj!ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhM hj!ubj)}(hhh]h)}(h WQ_* flagsh]h WQ_* flags}(hj "hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj"hM hj "ubah}(h]h ]h"]h$]h&]uh1jhj!ubeh}(h]h ]h"]h$]h&]uh1jkhj"hM hjx!ubjl)}(h;``int max_active`` max in-flight work items, 0 for default h](jr)}(h``int max_active``h]j)}(hj,"h]hint max_active}(hj."hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj*"ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhM 
hj&"ubj)}(hhh]h)}(h'max in-flight work items, 0 for defaulth]h'max in-flight work items, 0 for default}(hjE"hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjA"hM hjB"ubah}(h]h ]h"]h$]h&]uh1jhj&"ubeh}(h]h ]h"]h$]h&]uh1jkhjA"hM hjx!ubjl)}(h``...`` args for **fmt** h](jr)}(h``...``h]j)}(hje"h]h...}(hjg"hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjc"ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj_"ubj)}(hhh]h)}(hargs for **fmt**h](h args for }(hj~"hhhNhNubj)}(h**fmt**h]hfmt}(hj"hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj~"ubeh}(h]h ]h"]h$]h&]uh1hhjz"hMhj{"ubah}(h]h ]h"]h$]h&]uh1jhj_"ubeh}(h]h ]h"]h$]h&]uh1jkhjz"hMhjx!ubeh}(h]h ]h"]h$]h&]uh1jfhj\!ubh)}(h**Description**h]j)}(hj"h]h Description}(hj"hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj"ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj\!ubh)}(h>Resource managed workqueue, see alloc_workqueue() for details.h]h>Resource managed workqueue, see alloc_workqueue() for details.}(hj"hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj\!ubh)}(hThe workqueue will be automatically destroyed on driver detach. Typically this should be used in drivers already relying on devm interfaces.h]hThe workqueue will be automatically destroyed on driver detach. 
Typically this should be used in drivers already relying on devm interfaces.}(hj"hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj\!ubh)}(h **Return**h]j)}(hj"h]hReturn}(hj"hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj"ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj\!ubh)}(hCPointer to the allocated workqueue on success, ``NULL`` on failure.h](h/Pointer to the allocated workqueue on success, }(hj"hhhNhNubj)}(h``NULL``h]hNULL}(hj#hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj"ubh on failure.}(hj"hhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj\!ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j(alloc_workqueue_lockdep_map (C function)c.alloc_workqueue_lockdep_maphNtauh1jhjhhhNhNubj)}(hhh](j)}(hstruct workqueue_struct * alloc_workqueue_lockdep_map (const char *fmt, unsigned int flags, int max_active, struct lockdep_map *lockdep_map, ...)h]j)}(hstruct workqueue_struct *alloc_workqueue_lockdep_map(const char *fmt, unsigned int flags, int max_active, struct lockdep_map *lockdep_map, ...)h](j)}(hjh]hstruct}(hj;#hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj7#hhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhM ubj)}(h h]h }(hjI#hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj7#hhhjH#hM ubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjZ#hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjW#ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj\#modnameN classnameNj7j:)}j=]j@)}j3alloc_workqueue_lockdep_mapsbc.alloc_workqueue_lockdep_mapasbuh1hhj7#hhhjH#hM ubj)}(h h]h }(hj{#hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj7#hhhjH#hM ubjU)}(hjuh]h*}(hj#hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj7#hhhjH#hM ubj)}(halloc_workqueue_lockdep_maph]j)}(hjx#h]halloc_workqueue_lockdep_map}(hj#hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj#ubah}(h]h 
](j)j*eh"]h$]h&]jjuh1jhj7#hhhjH#hM ubj|)}(h[(const char *fmt, unsigned int flags, int max_active, struct lockdep_map *lockdep_map, ...)h](j)}(hconst char *fmth](j)}(hjh]hconst}(hj#hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj#ubj)}(h h]h }(hj#hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj#ubj)}(hcharh]hchar}(hj#hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj#ubj)}(h h]h }(hj#hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj#ubjU)}(hjuh]h*}(hj#hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj#ubj)}(hfmth]hfmt}(hj#hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj#ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj#ubj)}(hunsigned int flagsh](j)}(hunsignedh]hunsigned}(hj$hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj$ubj)}(h h]h }(hj $hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj$ubj)}(hinth]hint}(hj.$hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj$ubj)}(h h]h }(hj<$hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj$ubj)}(hflagsh]hflags}(hjJ$hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj$ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj#ubj)}(hint max_activeh](j)}(hinth]hint}(hjc$hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj_$ubj)}(h h]h }(hjq$hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj_$ubj)}(h max_activeh]h max_active}(hj$hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj_$ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj#ubj)}(hstruct lockdep_map *lockdep_maph](j)}(hjh]hstruct}(hj$hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj$ubj)}(h h]h }(hj$hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj$ubh)}(hhh]j)}(h lockdep_maph]h lockdep_map}(hj$hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj$ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj$modnameN classnameNj7j:)}j=]jv#c.alloc_workqueue_lockdep_mapasbuh1hhj$ubj)}(h h]h }(hj$hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj$ubjU)}(hjuh]h*}(hj$hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj$ubj)}(h lockdep_maph]h lockdep_map}(hj$hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj$ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj#ubj)}(h...h]jU)}(hjph]h...}(hj%hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj%ubah}(h]h ]h"]h$]h&]noemphjjuh1jhj#ubeh}(h]h ]h"]h$]h&]jjuh1j{hj7#hhhjH#hM ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj3#hhhjH#hM ubah}(h]j.#ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjH#hM hj0#hhubjC)}(hhh]h)}(h2allocate a workqueue with user-defined 
lockdep_maph]h2allocate a workqueue with user-defined lockdep_map}(hj1%hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhM hj.%hhubah}(h]h ]h"]h$]h&]uh1jBhj0#hhhjH#hM ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejI%jfjI%jgjhjiuh1jhhhjhNhNubjk)}(hX&**Parameters** ``const char *fmt`` printf format for the name of the workqueue ``unsigned int flags`` WQ_* flags ``int max_active`` max in-flight work items, 0 for default ``struct lockdep_map *lockdep_map`` user-defined lockdep_map ``...`` args for **fmt** **Description** Same as alloc_workqueue but with a user-defined lockdep_map. Useful for workqueues created with the same purpose and to avoid leaking a lockdep_map on each workqueue creation. **Return** Pointer to the allocated workqueue on success, ``NULL`` on failure.h](h)}(h**Parameters**h]j)}(hjS%h]h Parameters}(hjU%hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjQ%ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhM$hjM%ubjg)}(hhh](jl)}(h@``const char *fmt`` printf format for the name of the workqueue h](jr)}(h``const char *fmt``h]j)}(hjr%h]hconst char *fmt}(hjt%hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjp%ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhM!hjl%ubj)}(hhh]h)}(h+printf format for the name of the workqueueh]h+printf format for the name of the workqueue}(hj%hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj%hM!hj%ubah}(h]h ]h"]h$]h&]uh1jhjl%ubeh}(h]h ]h"]h$]h&]uh1jkhj%hM!hji%ubjl)}(h"``unsigned int flags`` WQ_* flags h](jr)}(h``unsigned int flags``h]j)}(hj%h]hunsigned int flags}(hj%hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj%ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhM"hj%ubj)}(hhh]h)}(h WQ_* flagsh]h WQ_* flags}(hj%hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj%hM"hj%ubah}(h]h ]h"]h$]h&]uh1jhj%ubeh}(h]h 
]h"]h$]h&]uh1jkhj%hM"hji%ubjl)}(h;``int max_active`` max in-flight work items, 0 for default h](jr)}(h``int max_active``h]j)}(hj%h]hint max_active}(hj%hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj%ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhM#hj%ubj)}(hhh]h)}(h'max in-flight work items, 0 for defaulth]h'max in-flight work items, 0 for default}(hj%hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj%hM#hj%ubah}(h]h ]h"]h$]h&]uh1jhj%ubeh}(h]h ]h"]h$]h&]uh1jkhj%hM#hji%ubjl)}(h=``struct lockdep_map *lockdep_map`` user-defined lockdep_map h](jr)}(h#``struct lockdep_map *lockdep_map``h]j)}(hj&h]hstruct lockdep_map *lockdep_map}(hj&hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj&ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhM$hj&ubj)}(hhh]h)}(huser-defined lockdep_maph]huser-defined lockdep_map}(hj6&hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj2&hM$hj3&ubah}(h]h ]h"]h$]h&]uh1jhj&ubeh}(h]h ]h"]h$]h&]uh1jkhj2&hM$hji%ubjl)}(h``...`` args for **fmt** h](jr)}(h``...``h]j)}(hjV&h]h...}(hjX&hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjT&ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhM%hjP&ubj)}(hhh]h)}(hargs for **fmt**h](h args for }(hjo&hhhNhNubj)}(h**fmt**h]hfmt}(hjw&hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjo&ubeh}(h]h ]h"]h$]h&]uh1hhjk&hM%hjl&ubah}(h]h ]h"]h$]h&]uh1jhjP&ubeh}(h]h ]h"]h$]h&]uh1jkhjk&hM%hji%ubeh}(h]h ]h"]h$]h&]uh1jfhjM%ubh)}(h**Description**h]j)}(hj&h]h Description}(hj&hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj&ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhM'hjM%ubh)}(hSame as alloc_workqueue but with a user-defined lockdep_map. Useful for workqueues created with the same purpose and to avoid leaking a lockdep_map on each workqueue creation.h]hSame as alloc_workqueue but with a user-defined lockdep_map. 
Useful for workqueues created with the same purpose and to avoid leaking a lockdep_map on each workqueue creation.}(hj&hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhM&hjM%ubh)}(h **Return**h]j)}(hj&h]hReturn}(hj&hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj&ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhM*hjM%ubh)}(hCPointer to the allocated workqueue on success, ``NULL`` on failure.h](h/Pointer to the allocated workqueue on success, }(hj&hhhNhNubj)}(h``NULL``h]hNULL}(hj&hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj&ubh on failure.}(hj&hhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhM+hjM%ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j-alloc_ordered_workqueue_lockdep_map (C macro)%c.alloc_ordered_workqueue_lockdep_maphNtauh1jhjhhhNhNubj)}(hhh](j)}(h#alloc_ordered_workqueue_lockdep_maph]j)}(h#alloc_ordered_workqueue_lockdep_maph]j)}(h#alloc_ordered_workqueue_lockdep_maph]j)}(hj'h]h#alloc_ordered_workqueue_lockdep_map}(hj!'hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj'ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj'hhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhM3ubah}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj'hhhj4'hM3ubah}(h]j'ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj4'hM3hj'hhubjC)}(hhh]h}(h]h ]h"]h$]h&]uh1jBhj'hhhj4'hM3ubeh}(h]h ](j_macroeh"]h$]h&]jdj_jejM'jfjM'jgjhjiuh1jhhhjhNhNubh)}(hJ``alloc_ordered_workqueue_lockdep_map (fmt, flags, lockdep_map, args...)``h]j)}(hjS'h]hFalloc_ordered_workqueue_lockdep_map (fmt, flags, lockdep_map, args...)}(hjU'hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjQ'ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhM5hjhhubj)}(hhj'ubh)}(hCPointer to the allocated workqueue on success, ``NULL`` on failure.h](h/Pointer to the allocated 
workqueue on success, }(hj(hhhNhNubj)}(h``NULL``h]hNULL}(hj(hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj(ubh on failure.}(hj(hhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhM?hj'ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j!alloc_ordered_workqueue (C macro)c.alloc_ordered_workqueuehNtauh1jhjhhhNhNubj)}(hhh](j)}(halloc_ordered_workqueueh]j)}(halloc_ordered_workqueueh]j)}(halloc_ordered_workqueueh]j)}(hj)h]halloc_ordered_workqueue}(hj)hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj)ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj)hhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMHubah}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj)hhhj0)hMHubah}(h]j )ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj0)hMHhj)hhubjC)}(hhh]h}(h]h ]h"]h$]h&]uh1jBhj)hhhj0)hMHubeh}(h]h ](j_macroeh"]h$]h&]jdj_jejI)jfjI)jgjhjiuh1jhhhjhNhNubh)}(h1``alloc_ordered_workqueue (fmt, flags, args...)``h]j)}(hjO)h]h-alloc_ordered_workqueue (fmt, flags, args...)}(hjQ)hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjM)ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMJhjhhubj)}(hallocate an ordered workqueue h]h)}(hallocate an ordered workqueueh]hallocate an ordered workqueue}(hji)hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMHhje)ubah}(h]h ]h"]h$]h&]uh1jhjw)hMHhjhhubjk)}(hX**Parameters** ``fmt`` printf format for the name of the workqueue ``flags`` WQ_* flags (only WQ_FREEZABLE and WQ_MEM_RECLAIM are meaningful) ``args...`` args for **fmt** **Description** Allocate an ordered workqueue. An ordered workqueue executes at most one work item at any given time in the queued order. They are implemented as unbound workqueues with **max_active** of one. 
**Return** Pointer to the allocated workqueue on success, ``NULL`` on failure.h](h)}(h**Parameters**h]j)}(hj)h]h Parameters}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj)ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMLhj~)ubjg)}(hhh](jl)}(h4``fmt`` printf format for the name of the workqueue h](jr)}(h``fmt``h]j)}(hj)h]hfmt}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj)ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMIhj)ubj)}(hhh]h)}(h+printf format for the name of the workqueueh]h+printf format for the name of the workqueue}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj)hMIhj)ubah}(h]h ]h"]h$]h&]uh1jhj)ubeh}(h]h ]h"]h$]h&]uh1jkhj)hMIhj)ubjl)}(hK``flags`` WQ_* flags (only WQ_FREEZABLE and WQ_MEM_RECLAIM are meaningful) h](jr)}(h ``flags``h]j)}(hj)h]hflags}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj)ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMJhj)ubj)}(hhh]h)}(h@WQ_* flags (only WQ_FREEZABLE and WQ_MEM_RECLAIM are meaningful)h]h@WQ_* flags (only WQ_FREEZABLE and WQ_MEM_RECLAIM are meaningful)}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj)hMJhj)ubah}(h]h ]h"]h$]h&]uh1jhj)ubeh}(h]h ]h"]h$]h&]uh1jkhj)hMJhj)ubjl)}(h``args...`` args for **fmt** h](jr)}(h ``args...``h]j)}(hj*h]hargs...}(hj*hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj*ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMKhj*ubj)}(hhh]h)}(hargs for **fmt**h](h args for }(hj.*hhhNhNubj)}(h**fmt**h]hfmt}(hj6*hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj.*ubeh}(h]h ]h"]h$]h&]uh1hhj**hMKhj+*ubah}(h]h ]h"]h$]h&]uh1jhj*ubeh}(h]h ]h"]h$]h&]uh1jkhj**hMKhj)ubeh}(h]h ]h"]h$]h&]uh1jfhj~)ubh)}(h**Description**h]j)}(hj^*h]h Description}(hj`*hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj\*ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: 
./include/linux/workqueue.hhMMhj~)ubh)}(hAllocate an ordered workqueue. An ordered workqueue executes at most one work item at any given time in the queued order. They are implemented as unbound workqueues with **max_active** of one.h](hAllocate an ordered workqueue. An ordered workqueue executes at most one work item at any given time in the queued order. They are implemented as unbound workqueues with }(hjt*hhhNhNubj)}(h**max_active**h]h max_active}(hj|*hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjt*ubh of one.}(hjt*hhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMLhj~)ubh)}(h **Return**h]j)}(hj*h]hReturn}(hj*hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj*ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMPhj~)ubh)}(hCPointer to the allocated workqueue on success, ``NULL`` on failure.h](h/Pointer to the allocated workqueue on success, }(hj*hhhNhNubj)}(h``NULL``h]hNULL}(hj*hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj*ubh on failure.}(hj*hhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMQhj~)ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jqueue_work (C function) c.queue_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(hGbool queue_work (struct workqueue_struct *wq, struct work_struct *work)h]j)}(hFbool queue_work(struct workqueue_struct *wq, struct work_struct *work)h](j)}(hboolh]hbool}(hj*hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj*hhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMubj)}(h h]h }(hj*hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj*hhhj*hMubj)}(h queue_workh]j)}(h queue_workh]h queue_work}(hj+hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj +ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj*hhhj*hMubj|)}(h7(struct workqueue_struct *wq, struct work_struct *work)h](j)}(hstruct workqueue_struct *wqh](j)}(hjh]hstruct}(hj++hhhNhNubah}(h]h 
]jah"]h$]h&]uh1jhj'+ubj)}(h h]h }(hj8+hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj'+ubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjI+hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjF+ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjK+modnameN classnameNj7j:)}j=]j@)}j3j+sb c.queue_workasbuh1hhj'+ubj)}(h h]h }(hji+hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj'+ubjU)}(hjuh]h*}(hjw+hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj'+ubj)}(hwqh]hwq}(hj+hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj'+ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj#+ubj)}(hstruct work_struct *workh](j)}(hjh]hstruct}(hj+hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj+ubj)}(h h]h }(hj+hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj+ubh)}(hhh]j)}(h work_structh]h work_struct}(hj+hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj+ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj+modnameN classnameNj7j:)}j=]je+ c.queue_workasbuh1hhj+ubj)}(h h]h }(hj+hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj+ubjU)}(hjuh]h*}(hj+hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj+ubj)}(hworkh]hwork}(hj+hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj+ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj#+ubeh}(h]h ]h"]h$]h&]jjuh1j{hj*hhhj*hMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj*hhhj*hMubah}(h]j*ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj*hMhj*hhubjC)}(hhh]h)}(hqueue work on a workqueueh]hqueue work on a workqueue}(hj,hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj,hhubah}(h]h ]h"]h$]h&]uh1jBhj*hhhj*hMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej6,jfj6,jgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``struct workqueue_struct *wq`` workqueue to use ``struct work_struct *work`` work to queue **Description** Returns ``false`` if **work** was already on a queue, ``true`` otherwise. We queue the work to the CPU on which it was submitted, but if the CPU dies it can be processed by another CPU. 
Memory-ordering properties: If it returns ``true``, guarantees that all stores preceding the call to queue_work() in the program order will be visible from the CPU which will execute **work** by the time such work executes, e.g., { x is initially 0 } CPU0 CPU1 WRITE_ONCE(x, 1); [ **work** is being executed ] r0 = queue_work(wq, work); r1 = READ_ONCE(x); Forbids: r0 == true && r1 == 0h](h)}(h**Parameters**h]j)}(hj@,h]h Parameters}(hjB,hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj>,ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj:,ubjg)}(hhh](jl)}(h1``struct workqueue_struct *wq`` workqueue to use h](jr)}(h``struct workqueue_struct *wq``h]j)}(hj_,h]hstruct workqueue_struct *wq}(hja,hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj],ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhjY,ubj)}(hhh]h)}(hworkqueue to useh]hworkqueue to use}(hjx,hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjt,hMhju,ubah}(h]h ]h"]h$]h&]uh1jhjY,ubeh}(h]h ]h"]h$]h&]uh1jkhjt,hMhjV,ubjl)}(h+``struct work_struct *work`` work to queue h](jr)}(h``struct work_struct *work``h]j)}(hj,h]hstruct work_struct *work}(hj,hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj,ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj,ubj)}(hhh]h)}(h work to queueh]h work to queue}(hj,hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj,hMhj,ubah}(h]h ]h"]h$]h&]uh1jhj,ubeh}(h]h ]h"]h$]h&]uh1jkhj,hMhjV,ubeh}(h]h ]h"]h$]h&]uh1jfhj:,ubh)}(h**Description**h]j)}(hj,h]h Description}(hj,hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj,ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj:,ubh)}(hIReturns ``false`` if **work** was already on a queue, ``true`` otherwise.h](hReturns }(hj,hhhNhNubj)}(h ``false``h]hfalse}(hj,hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj,ubh if }(hj,hhhNhNubj)}(h**work**h]hwork}(hj-hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj,ubh was 
already on a queue, }(hj,hhhNhNubj)}(h``true``h]htrue}(hj-hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj,ubh otherwise.}(hj,hhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj:,ubh)}(hoWe queue the work to the CPU on which it was submitted, but if the CPU dies it can be processed by another CPU.h]hoWe queue the work to the CPU on which it was submitted, but if the CPU dies it can be processed by another CPU.}(hj.-hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj:,ubh)}(hMemory-ordering properties: If it returns ``true``, guarantees that all stores preceding the call to queue_work() in the program order will be visible from the CPU which will execute **work** by the time such work executes, e.g.,h](h+Memory-ordering properties: If it returns }(hj=-hhhNhNubj)}(h``true``h]htrue}(hjE-hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj=-ubh, guarantees that all stores preceding the call to queue_work() in the program order will be visible from the CPU which will execute }(hj=-hhhNhNubj)}(h**work**h]hwork}(hjW-hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj=-ubh& by the time such work executes, e.g.,}(hj=-hhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj:,ubh)}(h{ x is initially 0 }h]h{ x is initially 0 }}(hjp-hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj:,ubj)}(hCPU0 CPU1 WRITE_ONCE(x, 1); [ **work** is being executed ] r0 = queue_work(wq, work); r1 = READ_ONCE(x); h](h)}(h'CPU0 CPU1h]h'CPU0 CPU1}(hj-hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj-ubh)}(hyWRITE_ONCE(x, 1); [ **work** is being executed ] r0 = queue_work(wq, work); r1 = READ_ONCE(x);h](h%WRITE_ONCE(x, 1); [ 
}(hj-hhhNhNubj)}(h**work**h]hwork}(hj-hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj-ubhL is being executed ] r0 = queue_work(wq, work); r1 = READ_ONCE(x);}(hj-hhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj-ubeh}(h]h ]h"]h$]h&]uh1jhj-hMhj:,ubh)}(hForbids: r0 == true && r1 == 0h]hForbids: r0 == true && r1 == 0}(hj-hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj:,ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jqueue_delayed_work (C function)c.queue_delayed_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(hfbool queue_delayed_work (struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay)h]j)}(hebool queue_delayed_work(struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay)h](j)}(hj*h]hbool}(hj-hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj-hhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMubj)}(h h]h }(hj-hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj-hhhj-hMubj)}(hqueue_delayed_workh]j)}(hqueue_delayed_workh]hqueue_delayed_work}(hj.hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj.ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj-hhhj-hMubj|)}(hN(struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay)h](j)}(hstruct workqueue_struct *wqh](j)}(hjh]hstruct}(hj$.hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj .ubj)}(h h]h }(hj1.hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj .ubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjB.hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj?.ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjD.modnameN classnameNj7j:)}j=]j@)}j3j .sbc.queue_delayed_workasbuh1hhj .ubj)}(h h]h }(hjb.hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj .ubjU)}(hjuh]h*}(hjp.hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj .ubj)}(hwqh]hwq}(hj}.hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj .ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj.ubj)}(hstruct delayed_work *dworkh](j)}(hjh]hstruct}(hj.hhhNhNubah}(h]h 
]jah"]h$]h&]uh1jhj.ubj)}(h h]h }(hj.hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj.ubh)}(hhh]j)}(h delayed_workh]h delayed_work}(hj.hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj.ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj.modnameN classnameNj7j:)}j=]j^.c.queue_delayed_workasbuh1hhj.ubj)}(h h]h }(hj.hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj.ubjU)}(hjuh]h*}(hj.hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj.ubj)}(hdworkh]hdwork}(hj.hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj.ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj.ubj)}(hunsigned long delayh](j)}(hunsignedh]hunsigned}(hj/hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj/ubj)}(h h]h }(hj/hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj/ubj)}(hlongh]hlong}(hj"/hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj/ubj)}(h h]h }(hj0/hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj/ubj)}(hdelayh]hdelay}(hj>/hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj/ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj.ubeh}(h]h ]h"]h$]h&]jjuh1j{hj-hhhj-hMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj-hhhj-hMubah}(h]j-ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj-hMhj-hhubjC)}(hhh]h)}(h%queue work on a workqueue after delayh]h%queue work on a workqueue after delay}(hjh/hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhje/hhubah}(h]h ]h"]h$]h&]uh1jBhj-hhhj-hMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej/jfj/jgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``struct workqueue_struct *wq`` workqueue to use ``struct delayed_work *dwork`` delayable work to queue ``unsigned long delay`` number of jiffies to wait before queueing **Description** Equivalent to queue_delayed_work_on() but tries to use the local CPU.h](h)}(h**Parameters**h]j)}(hj/h]h Parameters}(hj/hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj/ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj/ubjg)}(hhh](jl)}(h1``struct workqueue_struct *wq`` workqueue to use h](jr)}(h``struct workqueue_struct *wq``h]j)}(hj/h]hstruct workqueue_struct *wq}(hj/hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj/ubah}(h]h 
]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj/ubj)}(hhh]h)}(hworkqueue to useh]hworkqueue to use}(hj/hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj/hMhj/ubah}(h]h ]h"]h$]h&]uh1jhj/ubeh}(h]h ]h"]h$]h&]uh1jkhj/hMhj/ubjl)}(h7``struct delayed_work *dwork`` delayable work to queue h](jr)}(h``struct delayed_work *dwork``h]j)}(hj/h]hstruct delayed_work *dwork}(hj/hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj/ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj/ubj)}(hhh]h)}(hdelayable work to queueh]hdelayable work to queue}(hj/hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj/hMhj/ubah}(h]h ]h"]h$]h&]uh1jhj/ubeh}(h]h ]h"]h$]h&]uh1jkhj/hMhj/ubjl)}(hB``unsigned long delay`` number of jiffies to wait before queueing h](jr)}(h``unsigned long delay``h]j)}(hj0h]hunsigned long delay}(hj0hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj0ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj0ubj)}(hhh]h)}(h)number of jiffies to wait before queueingh]h)number of jiffies to wait before queueing}(hj40hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj00hMhj10ubah}(h]h ]h"]h$]h&]uh1jhj0ubeh}(h]h ]h"]h$]h&]uh1jkhj00hMhj/ubeh}(h]h ]h"]h$]h&]uh1jfhj/ubh)}(h**Description**h]j)}(hjV0h]h Description}(hjX0hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjT0ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj/ubh)}(hEEquivalent to queue_delayed_work_on() but tries to use the local CPU.h]hEEquivalent to queue_delayed_work_on() but tries to use the local CPU.}(hjl0hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj/ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jmod_delayed_work (C function)c.mod_delayed_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(hdbool mod_delayed_work (struct workqueue_struct *wq, struct 
delayed_work *dwork, unsigned long delay)h]j)}(hcbool mod_delayed_work(struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay)h](j)}(hj*h]hbool}(hj0hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj0hhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMubj)}(h h]h }(hj0hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj0hhhj0hMubj)}(hmod_delayed_workh]j)}(hmod_delayed_workh]hmod_delayed_work}(hj0hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj0ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj0hhhj0hMubj|)}(hN(struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay)h](j)}(hstruct workqueue_struct *wqh](j)}(hjh]hstruct}(hj0hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj0ubj)}(h h]h }(hj0hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj0ubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hj0hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj0ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj0modnameN classnameNj7j:)}j=]j@)}j3j0sbc.mod_delayed_workasbuh1hhj0ubj)}(h h]h }(hj1hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj0ubjU)}(hjuh]h*}(hj#1hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj0ubj)}(hwqh]hwq}(hj01hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj0ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj0ubj)}(hstruct delayed_work *dworkh](j)}(hjh]hstruct}(hjI1hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjE1ubj)}(h h]h }(hjV1hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjE1ubh)}(hhh]j)}(h delayed_workh]h delayed_work}(hjg1hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjd1ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetji1modnameN classnameNj7j:)}j=]j1c.mod_delayed_workasbuh1hhjE1ubj)}(h h]h }(hj1hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjE1ubjU)}(hjuh]h*}(hj1hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjE1ubj)}(hdworkh]hdwork}(hj1hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjE1ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj0ubj)}(hunsigned long delayh](j)}(hunsignedh]hunsigned}(hj1hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj1ubj)}(h h]h }(hj1hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj1ubj)}(hlongh]hlong}(hj1hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj1ubj)}(h h]h }(hj1hhhNhNubah}(h]h ]j 
ah"]h$]h&]uh1jhj1ubj)}(hdelayh]hdelay}(hj1hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj1ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj0ubeh}(h]h ]h"]h$]h&]jjuh1j{hj0hhhj0hMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj0hhhj0hMubah}(h]j0ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj0hMhj0hhubjC)}(hhh]h)}(h'modify delay of or queue a delayed workh]h'modify delay of or queue a delayed work}(hj2hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj2hhubah}(h]h ]h"]h$]h&]uh1jBhj0hhhj0hMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej32jfj32jgjhjiuh1jhhhjhNhNubjk)}(h**Parameters** ``struct workqueue_struct *wq`` workqueue to use ``struct delayed_work *dwork`` work to queue ``unsigned long delay`` number of jiffies to wait before queueing **Description** mod_delayed_work_on() on local CPU.h](h)}(h**Parameters**h]j)}(hj=2h]h Parameters}(hj?2hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj;2ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj72ubjg)}(hhh](jl)}(h1``struct workqueue_struct *wq`` workqueue to use h](jr)}(h``struct workqueue_struct *wq``h]j)}(hj\2h]hstruct workqueue_struct *wq}(hj^2hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjZ2ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhjV2ubj)}(hhh]h)}(hworkqueue to useh]hworkqueue to use}(hju2hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjq2hMhjr2ubah}(h]h ]h"]h$]h&]uh1jhjV2ubeh}(h]h ]h"]h$]h&]uh1jkhjq2hMhjS2ubjl)}(h-``struct delayed_work *dwork`` work to queue h](jr)}(h``struct delayed_work *dwork``h]j)}(hj2h]hstruct delayed_work *dwork}(hj2hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj2ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj2ubj)}(hhh]h)}(h work to queueh]h work to queue}(hj2hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj2hMhj2ubah}(h]h ]h"]h$]h&]uh1jhj2ubeh}(h]h ]h"]h$]h&]uh1jkhj2hMhjS2ubjl)}(hB``unsigned long delay`` number of jiffies to wait 
before queueing h](jr)}(h``unsigned long delay``h]j)}(hj2h]hunsigned long delay}(hj2hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj2ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj2ubj)}(hhh]h)}(h)number of jiffies to wait before queueingh]h)number of jiffies to wait before queueing}(hj2hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj2hMhj2ubah}(h]h ]h"]h$]h&]uh1jhj2ubeh}(h]h ]h"]h$]h&]uh1jkhj2hMhjS2ubeh}(h]h ]h"]h$]h&]uh1jfhj72ubh)}(h**Description**h]j)}(hj 3h]h Description}(hj 3hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj3ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj72ubh)}(h#mod_delayed_work_on() on local CPU.h]h#mod_delayed_work_on() on local CPU.}(hj3hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj72ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jschedule_work_on (C function)c.schedule_work_onhNtauh1jhjhhhNhNubj)}(hhh](j)}(h9bool schedule_work_on (int cpu, struct work_struct *work)h]j)}(h8bool schedule_work_on(int cpu, struct work_struct *work)h](j)}(hj*h]hbool}(hjN3hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjJ3hhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMubj)}(h h]h }(hj\3hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjJ3hhhj[3hMubj)}(hschedule_work_onh]j)}(hschedule_work_onh]hschedule_work_on}(hjn3hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjj3ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjJ3hhhj[3hMubj|)}(h#(int cpu, struct work_struct *work)h](j)}(hint cpuh](j)}(hinth]hint}(hj3hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj3ubj)}(h h]h }(hj3hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj3ubj)}(hcpuh]hcpu}(hj3hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj3ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj3ubj)}(hstruct work_struct *workh](j)}(hjh]hstruct}(hj3hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj3ubj)}(h h]h }(hj3hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj3ubh)}(hhh]j)}(h work_structh]h 
work_struct}(hj3hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj3ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj3modnameN classnameNj7j:)}j=]j@)}j3jp3sbc.schedule_work_onasbuh1hhj3ubj)}(h h]h }(hj3hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj3ubjU)}(hjuh]h*}(hj 4hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj3ubj)}(hworkh]hwork}(hj4hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj3ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj3ubeh}(h]h ]h"]h$]h&]jjuh1j{hjJ3hhhj[3hMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjF3hhhj[3hMubah}(h]jA3ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj[3hMhjC3hhubjC)}(hhh]h)}(hput work task on a specific cpuh]hput work task on a specific cpu}(hjB4hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj?4hhubah}(h]h ]h"]h$]h&]uh1jBhjC3hhhj[3hMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejZ4jfjZ4jgjhjiuh1jhhhjhNhNubjk)}(h**Parameters** ``int cpu`` cpu to put the work task on ``struct work_struct *work`` job to be done **Description** This puts a job on a specific cpuh](h)}(h**Parameters**h]j)}(hjd4h]h Parameters}(hjf4hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjb4ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj^4ubjg)}(hhh](jl)}(h(``int cpu`` cpu to put the work task on h](jr)}(h ``int cpu``h]j)}(hj4h]hint cpu}(hj4hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj4ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj}4ubj)}(hhh]h)}(hcpu to put the work task onh]hcpu to put the work task on}(hj4hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj4hMhj4ubah}(h]h ]h"]h$]h&]uh1jhj}4ubeh}(h]h ]h"]h$]h&]uh1jkhj4hMhjz4ubjl)}(h,``struct work_struct *work`` job to be done h](jr)}(h``struct work_struct *work``h]j)}(hj4h]hstruct work_struct *work}(hj4hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj4ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj4ubj)}(hhh]h)}(hjob to be doneh]hjob to be done}(hj4hhhNhNubah}(h]h 
]h"]h$]h&]uh1hhj4hMhj4ubah}(h]h ]h"]h$]h&]uh1jhj4ubeh}(h]h ]h"]h$]h&]uh1jkhj4hMhjz4ubeh}(h]h ]h"]h$]h&]uh1jfhj^4ubh)}(h**Description**h]j)}(hj4h]h Description}(hj4hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj4ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj^4ubh)}(h!This puts a job on a specific cpuh]h!This puts a job on a specific cpu}(hj 5hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj^4ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jschedule_work (C function)c.schedule_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(h-bool schedule_work (struct work_struct *work)h]j)}(h,bool schedule_work(struct work_struct *work)h](j)}(hj*h]hbool}(hj<5hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj85hhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMubj)}(h h]h }(hjJ5hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj85hhhjI5hMubj)}(h schedule_workh]j)}(h schedule_workh]h schedule_work}(hj\5hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjX5ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj85hhhjI5hMubj|)}(h(struct work_struct *work)h]j)}(hstruct work_struct *workh](j)}(hjh]hstruct}(hjx5hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjt5ubj)}(h h]h }(hj5hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjt5ubh)}(hhh]j)}(h work_structh]h work_struct}(hj5hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj5ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj5modnameN classnameNj7j:)}j=]j@)}j3j^5sbc.schedule_workasbuh1hhjt5ubj)}(h h]h }(hj5hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjt5ubjU)}(hjuh]h*}(hj5hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjt5ubj)}(hworkh]hwork}(hj5hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjt5ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjp5ubah}(h]h ]h"]h$]h&]jjuh1j{hj85hhhjI5hMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj45hhhjI5hMubah}(h]j/5ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjI5hMhj15hhubjC)}(hhh]h)}(h"put work task in per-CPU workqueueh]h"put work task in per-CPU workqueue}(hj5hhhNhNubah}(h]h 
]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj5hhubah}(h]h ]h"]h$]h&]uh1jBhj15hhhjI5hMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej6jfj6jgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``struct work_struct *work`` job to be done **Description** Returns ``false`` if **work** was already on the system per-CPU workqueue and ``true`` otherwise. This puts a job in the system per-CPU workqueue if it was not already queued and leaves it in the same position on the system per-CPU workqueue otherwise. Shares the same memory-ordering properties of queue_work(), cf. the DocBook header of queue_work().h](h)}(h**Parameters**h]j)}(hj6h]h Parameters}(hj6hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj6ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj6ubjg)}(hhh]jl)}(h,``struct work_struct *work`` job to be done h](jr)}(h``struct work_struct *work``h]j)}(hj<6h]hstruct work_struct *work}(hj>6hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj:6ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj66ubj)}(hhh]h)}(hjob to be doneh]hjob to be done}(hjU6hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjQ6hMhjR6ubah}(h]h ]h"]h$]h&]uh1jhj66ubeh}(h]h ]h"]h$]h&]uh1jkhjQ6hMhj36ubah}(h]h ]h"]h$]h&]uh1jfhj6ubh)}(h**Description**h]j)}(hjw6h]h Description}(hjy6hhhNhNubah}(h]h ]h"]h$]h&]uh1jhju6ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj6ubh)}(haReturns ``false`` if **work** was already on the system per-CPU workqueue and ``true`` otherwise.h](hReturns }(hj6hhhNhNubj)}(h ``false``h]hfalse}(hj6hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj6ubh if }(hj6hhhNhNubj)}(h**work**h]hwork}(hj6hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj6ubh1 was already on the system per-CPU workqueue and }(hj6hhhNhNubj)}(h``true``h]htrue}(hj6hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj6ubh otherwise.}(hj6hhhNhNubeh}(h]h 
]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj6ubh)}(hThis puts a job in the system per-CPU workqueue if it was not already queued and leaves it in the same position on the system per-CPU workqueue otherwise.h]hThis puts a job in the system per-CPU workqueue if it was not already queued and leaves it in the same position on the system per-CPU workqueue otherwise.}(hj6hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj6ubh)}(hcShares the same memory-ordering properties of queue_work(), cf. the DocBook header of queue_work().h]hcShares the same memory-ordering properties of queue_work(), cf. the DocBook header of queue_work().}(hj6hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj6ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j"enable_and_queue_work (C function)c.enable_and_queue_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(hRbool enable_and_queue_work (struct workqueue_struct *wq, struct work_struct *work)h]j)}(hQbool enable_and_queue_work(struct workqueue_struct *wq, struct work_struct *work)h](j)}(hj*h]hbool}(hj7hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj 7hhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMubj)}(h h]h }(hj7hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj 7hhhj7hMubj)}(henable_and_queue_workh]j)}(henable_and_queue_workh]henable_and_queue_work}(hj07hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj,7ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj 7hhhj7hMubj|)}(h7(struct workqueue_struct *wq, struct work_struct *work)h](j)}(hstruct workqueue_struct *wqh](j)}(hjh]hstruct}(hjL7hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjH7ubj)}(h h]h }(hjY7hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjH7ubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjj7hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjg7ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 
reftargetjl7modnameN classnameNj7j:)}j=]j@)}j3j27sbc.enable_and_queue_workasbuh1hhjH7ubj)}(h h]h }(hj7hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjH7ubjU)}(hjuh]h*}(hj7hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjH7ubj)}(hwqh]hwq}(hj7hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjH7ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjD7ubj)}(hstruct work_struct *workh](j)}(hjh]hstruct}(hj7hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj7ubj)}(h h]h }(hj7hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj7ubh)}(hhh]j)}(h work_structh]h work_struct}(hj7hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj7ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj7modnameN classnameNj7j:)}j=]j7c.enable_and_queue_workasbuh1hhj7ubj)}(h h]h }(hj7hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj7ubjU)}(hjuh]h*}(hj8hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj7ubj)}(hworkh]hwork}(hj8hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj7ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjD7ubeh}(h]h ]h"]h$]h&]jjuh1j{hj 7hhhj7hMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj7hhhj7hMubah}(h]j7ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj7hMhj7hhubjC)}(hhh]h)}(h4Enable and queue a work item on a specific workqueueh]h4Enable and queue a work item on a specific workqueue}(hj?8hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj<8hhubah}(h]h ]h"]h$]h&]uh1jBhj7hhhj7hMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejW8jfjW8jgjhjiuh1jhhhjhNhNubjk)}(hX **Parameters** ``struct workqueue_struct *wq`` The target workqueue ``struct work_struct *work`` The work item to be enabled and queued **Description** This function combines the operations of enable_work() and queue_work(), providing a convenient way to enable and queue a work item in a single call. It invokes enable_work() on **work** and then queues it if the disable depth reached 0. Returns ``true`` if the disable depth reached 0 and **work** is queued, and ``false`` otherwise. Note that **work** is always queued when disable depth reaches zero. 
If the desired behavior is queueing only if certain events took place while **work** is disabled, the user should implement the necessary state tracking and perform explicit conditional queueing after enable_work().h](h)}(h**Parameters**h]j)}(hja8h]h Parameters}(hjc8hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj_8ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj[8ubjg)}(hhh](jl)}(h5``struct workqueue_struct *wq`` The target workqueue h](jr)}(h``struct workqueue_struct *wq``h]j)}(hj8h]hstruct workqueue_struct *wq}(hj8hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj~8ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhjz8ubj)}(hhh]h)}(hThe target workqueueh]hThe target workqueue}(hj8hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj8hMhj8ubah}(h]h ]h"]h$]h&]uh1jhjz8ubeh}(h]h ]h"]h$]h&]uh1jkhj8hMhjw8ubjl)}(hD``struct work_struct *work`` The work item to be enabled and queued h](jr)}(h``struct work_struct *work``h]j)}(hj8h]hstruct work_struct *work}(hj8hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj8ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj8ubj)}(hhh]h)}(h&The work item to be enabled and queuedh]h&The work item to be enabled and queued}(hj8hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj8hMhj8ubah}(h]h ]h"]h$]h&]uh1jhj8ubeh}(h]h ]h"]h$]h&]uh1jkhj8hMhjw8ubeh}(h]h ]h"]h$]h&]uh1jfhj[8ubh)}(h**Description**h]j)}(hj8h]h Description}(hj8hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj8ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj[8ubh)}(hXNThis function combines the operations of enable_work() and queue_work(), providing a convenient way to enable and queue a work item in a single call. It invokes enable_work() on **work** and then queues it if the disable depth reached 0. 
Returns ``true`` if the disable depth reached 0 and **work** is queued, and ``false`` otherwise.h](hThis function combines the operations of enable_work() and queue_work(), providing a convenient way to enable and queue a work item in a single call. It invokes enable_work() on }(hj 9hhhNhNubj)}(h**work**h]hwork}(hj9hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj 9ubh< and then queues it if the disable depth reached 0. Returns }(hj 9hhhNhNubj)}(h``true``h]htrue}(hj$9hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj 9ubh$ if the disable depth reached 0 and }(hj 9hhhNhNubj)}(h**work**h]hwork}(hj69hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj 9ubh is queued, and }(hj 9hhhNhNubj)}(h ``false``h]hfalse}(hjH9hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj 9ubh otherwise.}(hj 9hhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj[8ubh)}(hXNote that **work** is always queued when disable depth reaches zero. If the desired behavior is queueing only if certain events took place while **work** is disabled, the user should implement the necessary state tracking and perform explicit conditional queueing after enable_work().h](h Note that }(hja9hhhNhNubj)}(h**work**h]hwork}(hji9hhhNhNubah}(h]h ]h"]h$]h&]uh1jhja9ubh is always queued when disable depth reaches zero. 
If the desired behavior is queueing only if certain events took place while }(hja9hhhNhNubj)}(h**work**h]hwork}(hj{9hhhNhNubah}(h]h ]h"]h$]h&]uh1jhja9ubh is disabled, the user should implement the necessary state tracking and perform explicit conditional queueing after enable_work().}(hja9hhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMhj[8ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j%schedule_delayed_work_on (C function)c.schedule_delayed_work_onhNtauh1jhjhhhNhNubj)}(hhh](j)}(hXbool schedule_delayed_work_on (int cpu, struct delayed_work *dwork, unsigned long delay)h]j)}(hWbool schedule_delayed_work_on(int cpu, struct delayed_work *dwork, unsigned long delay)h](j)}(hj*h]hbool}(hj9hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj9hhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhM>ubj)}(h h]h }(hj9hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj9hhhj9hM>ubj)}(hschedule_delayed_work_onh]j)}(hschedule_delayed_work_onh]hschedule_delayed_work_on}(hj9hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj9ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj9hhhj9hM>ubj|)}(h:(int cpu, struct delayed_work *dwork, unsigned long delay)h](j)}(hint cpuh](j)}(hinth]hint}(hj9hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj9ubj)}(h h]h }(hj9hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj9ubj)}(hcpuh]hcpu}(hj :hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj9ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj9ubj)}(hstruct delayed_work *dworkh](j)}(hjh]hstruct}(hj%:hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj!:ubj)}(h h]h }(hj2:hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj!:ubh)}(hhh]j)}(h delayed_workh]h delayed_work}(hjC:hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj@:ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjE:modnameN classnameNj7j:)}j=]j@)}j3j9sbc.schedule_delayed_work_onasbuh1hhj!:ubj)}(h h]h }(hjc:hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj!:ubjU)}(hjuh]h*}(hjq:hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj!:ubj)}(hdworkh]hdwork}(hj~:hhhNhNubah}(h]h 
]j"ah"]h$]h&]uh1jhj!:ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj9ubj)}(hunsigned long delayh](j)}(hunsignedh]hunsigned}(hj:hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj:ubj)}(h h]h }(hj:hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj:ubj)}(hlongh]hlong}(hj:hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj:ubj)}(h h]h }(hj:hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj:ubj)}(hdelayh]hdelay}(hj:hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj:ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj9ubeh}(h]h ]h"]h$]h&]jjuh1j{hj9hhhj9hM>ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj9hhhj9hM>ubah}(h]j9ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj9hM>hj9hhubjC)}(hhh]h)}(h2queue work in per-CPU workqueue on CPU after delayh]h2queue work in per-CPU workqueue on CPU after delay}(hj:hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhM>hj:hhubah}(h]h ]h"]h$]h&]uh1jBhj9hhhj9hM>ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej;jfj;jgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``int cpu`` cpu to use ``struct delayed_work *dwork`` job to be done ``unsigned long delay`` number of jiffies to wait **Description** After waiting for a given time this puts a job in the system per-CPU workqueue on the specified CPU.h](h)}(h**Parameters**h]j)}(hj;h]h Parameters}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj;ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMBhj;ubjg)}(hhh](jl)}(h``int cpu`` cpu to use h](jr)}(h ``int cpu``h]j)}(hj:;h]hint cpu}(hj<;hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj8;ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhM?hj4;ubj)}(hhh]h)}(h cpu to useh]h cpu to use}(hjS;hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjO;hM?hjP;ubah}(h]h ]h"]h$]h&]uh1jhj4;ubeh}(h]h ]h"]h$]h&]uh1jkhjO;hM?hj1;ubjl)}(h.``struct delayed_work *dwork`` job to be done h](jr)}(h``struct delayed_work *dwork``h]j)}(hjs;h]hstruct delayed_work *dwork}(hju;hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjq;ubah}(h]h 
]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhM@hjm;ubj)}(hhh]h)}(hjob to be doneh]hjob to be done}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj;hM@hj;ubah}(h]h ]h"]h$]h&]uh1jhjm;ubeh}(h]h ]h"]h$]h&]uh1jkhj;hM@hj1;ubjl)}(h2``unsigned long delay`` number of jiffies to wait h](jr)}(h``unsigned long delay``h]j)}(hj;h]hunsigned long delay}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj;ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMAhj;ubj)}(hhh]h)}(hnumber of jiffies to waith]hnumber of jiffies to wait}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj;hMAhj;ubah}(h]h ]h"]h$]h&]uh1jhj;ubeh}(h]h ]h"]h$]h&]uh1jkhj;hMAhj1;ubeh}(h]h ]h"]h$]h&]uh1jfhj;ubh)}(h**Description**h]j)}(hj;h]h Description}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj;ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMChj;ubh)}(hdAfter waiting for a given time this puts a job in the system per-CPU workqueue on the specified CPU.h]hdAfter waiting for a given time this puts a job in the system per-CPU workqueue on the specified CPU.}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMBhj;ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j"schedule_delayed_work (C function)c.schedule_delayed_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(hLbool schedule_delayed_work (struct delayed_work *dwork, unsigned long delay)h]j)}(hKbool schedule_delayed_work(struct delayed_work *dwork, unsigned long delay)h](j)}(hj*h]hbool}(hj,<hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj(<hhh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMMubj)}(h h]h }(hj:<hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj(<hhhj9<hMMubj)}(hschedule_delayed_workh]j)}(hschedule_delayed_workh]hschedule_delayed_work}(hjL<hhhNhNubah}(h]h 
]j"ah"]h$]h&]uh1jhjH<ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj(<hhhj9<hMMubj|)}(h1(struct delayed_work *dwork, unsigned long delay)h](j)}(hstruct delayed_work *dworkh](j)}(hjh]hstruct}(hjh<hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjd<ubj)}(h h]h }(hju<hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjd<ubh)}(hhh]j)}(h delayed_workh]h delayed_work}(hj<hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj<ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj<modnameN classnameNj7j:)}j=]j@)}j3jN<sbc.schedule_delayed_workasbuh1hhjd<ubj)}(h h]h }(hj<hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjd<ubjU)}(hjuh]h*}(hj<hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjd<ubj)}(hdworkh]hdwork}(hj<hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjd<ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj`<ubj)}(hunsigned long delayh](j)}(hunsignedh]hunsigned}(hj<hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj<ubj)}(h h]h }(hj<hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj<ubj)}(hlongh]hlong}(hj<hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj<ubj)}(h h]h }(hj=hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj<ubj)}(hdelayh]hdelay}(hj=hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj<ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj`<ubeh}(h]h ]h"]h$]h&]jjuh1j{hj(<hhhj9<hMMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj$<hhhj9<hMMubah}(h]j<ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj9<hMMhj!<hhubjC)}(hhh]h)}(h.put work task in per-CPU workqueue after delayh]h.put work task in per-CPU workqueue after delay}(hj<=hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMMhj9=hhubah}(h]h ]h"]h$]h&]uh1jBhj!<hhhj9<hMMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejT=jfjT=jgjhjiuh1jhhhjhNhNubjk)}(h**Parameters** ``struct delayed_work *dwork`` job to be done ``unsigned long delay`` number of jiffies to wait or 0 for immediate execution **Description** After waiting for a given time this puts a job in the system per-CPU workqueue.h](h)}(h**Parameters**h]j)}(hj^=h]h Parameters}(hj`=hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj\=ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: 
./include/linux/workqueue.hhMQhjX=ubjg)}(hhh](jl)}(h.``struct delayed_work *dwork`` job to be done h](jr)}(h``struct delayed_work *dwork``h]j)}(hj}=h]hstruct delayed_work *dwork}(hj=hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj{=ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMNhjw=ubj)}(hhh]h)}(hjob to be doneh]hjob to be done}(hj=hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj=hMNhj=ubah}(h]h ]h"]h$]h&]uh1jhjw=ubeh}(h]h ]h"]h$]h&]uh1jkhj=hMNhjt=ubjl)}(hO``unsigned long delay`` number of jiffies to wait or 0 for immediate execution h](jr)}(h``unsigned long delay``h]j)}(hj=h]hunsigned long delay}(hj=hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj=ubah}(h]h ]h"]h$]h&]uh1jqh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMOhj=ubj)}(hhh]h)}(h6number of jiffies to wait or 0 for immediate executionh]h6number of jiffies to wait or 0 for immediate execution}(hj=hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj=hMOhj=ubah}(h]h ]h"]h$]h&]uh1jhj=ubeh}(h]h ]h"]h$]h&]uh1jkhj=hMOhjt=ubeh}(h]h ]h"]h$]h&]uh1jfhjX=ubh)}(h**Description**h]j)}(hj=h]h Description}(hj=hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj=ubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMQhjX=ubh)}(hOAfter waiting for a given time this puts a job in the system per-CPU workqueue.h]hOAfter waiting for a given time this puts a job in the system per-CPU workqueue.}(hj>hhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:793: ./include/linux/workqueue.hhMPhjX=ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jfor_each_pool (C macro)c.for_each_poolhNtauh1jhjhhhNhNubj)}(hhh](j)}(h for_each_poolh]j)}(h for_each_poolh]j)}(h for_each_poolh]j)}(hj0>h]h for_each_pool}(hj:>hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj6>ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj2>hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: 
./kernel/workqueue.chM=ubah}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj.>hhhjM>hM=ubah}(h]j)>ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjM>hM=hj+>hhubjC)}(hhh]h}(h]h ]h"]h$]h&]uh1jBhj+>hhhjM>hM=ubeh}(h]h ](j_macroeh"]h$]h&]jdj_jejf>jfjf>jgjhjiuh1jhhhjhNhNubh)}(h``for_each_pool (pool, pi)``h]j)}(hjl>h]hfor_each_pool (pool, pi)}(hjn>hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjj>ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM?hjhhubj)}(h/iterate through all worker_pools in the system h]h)}(h.iterate through all worker_pools in the systemh]h.iterate through all worker_pools in the system}(hj>hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM=hj>ubah}(h]h ]h"]h$]h&]uh1jhj>hM=hjhhubjk)}(hXz**Parameters** ``pool`` iteration cursor ``pi`` integer used for iteration **Description** This must be called either with wq_pool_mutex held or RCU read locked. If the pool needs to be used beyond the locking in effect, the caller is responsible for guaranteeing that the pool stays online. 
The if/else clause exists only for the lockdep assertion and can be ignored.h](h)}(h**Parameters**h]j)}(hj>h]h Parameters}(hj>hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj>ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMAhj>ubjg)}(hhh](jl)}(h``pool`` iteration cursor h](jr)}(h``pool``h]j)}(hj>h]hpool}(hj>hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj>ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM>hj>ubj)}(hhh]h)}(hiteration cursorh]hiteration cursor}(hj>hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj>hM>hj>ubah}(h]h ]h"]h$]h&]uh1jhj>ubeh}(h]h ]h"]h$]h&]uh1jkhj>hM>hj>ubjl)}(h"``pi`` integer used for iteration h](jr)}(h``pi``h]j)}(hj>h]hpi}(hj>hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj>ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM?hj>ubj)}(hhh]h)}(hinteger used for iterationh]hinteger used for iteration}(hj?hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj?hM?hj?ubah}(h]h ]h"]h$]h&]uh1jhj>ubeh}(h]h ]h"]h$]h&]uh1jkhj?hM?hj>ubeh}(h]h ]h"]h$]h&]uh1jfhj>ubh)}(h**Description**h]j)}(hj4?h]h Description}(hj6?hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj2?ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMAhj>ubh)}(hThis must be called either with wq_pool_mutex held or RCU read locked. If the pool needs to be used beyond the locking in effect, the caller is responsible for guaranteeing that the pool stays online.h]hThis must be called either with wq_pool_mutex held or RCU read locked. 
If the pool needs to be used beyond the locking in effect, the caller is responsible for guaranteeing that the pool stays online.}(hjJ?hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM@hj>ubh)}(hLThe if/else clause exists only for the lockdep assertion and can be ignored.h]hLThe if/else clause exists only for the lockdep assertion and can be ignored.}(hjY?hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMDhj>ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jfor_each_pool_worker (C macro)c.for_each_pool_workerhNtauh1jhjhhhNhNubj)}(hhh](j)}(hfor_each_pool_workerh]j)}(hfor_each_pool_workerh]j)}(hfor_each_pool_workerh]j)}(hj?h]hfor_each_pool_worker}(hj?hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj?ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj?hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMNubah}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj?hhhj?hMNubah}(h]j{?ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj?hMNhj}?hhubjC)}(hhh]h}(h]h ]h"]h$]h&]uh1jBhj}?hhhj?hMNubeh}(h]h ](j_macroeh"]h$]h&]jdj_jej?jfj?jgjhjiuh1jhhhjhNhNubh)}(h'``for_each_pool_worker (worker, pool)``h]j)}(hj?h]h#for_each_pool_worker (worker, pool)}(hj?hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj?ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMPhjhhubj)}(h-iterate through all workers of a worker_pool h]h)}(h,iterate through all workers of a worker_poolh]h,iterate through all workers of a worker_pool}(hj?hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMNhj?ubah}(h]h ]h"]h$]h&]uh1jhj?hMNhjhhubjk)}(h**Parameters** ``worker`` iteration cursor ``pool`` worker_pool to iterate workers of **Description** This must be called with wq_pool_attach_mutex. 
The if/else clause exists only for the lockdep assertion and can be ignored.h](h)}(h**Parameters**h]j)}(hj?h]h Parameters}(hj?hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj?ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMRhj?ubjg)}(hhh](jl)}(h``worker`` iteration cursor h](jr)}(h ``worker``h]j)}(hj@h]hworker}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj@ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMOhj @ubj)}(hhh]h)}(hiteration cursorh]hiteration cursor}(hj+@hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj'@hMOhj(@ubah}(h]h ]h"]h$]h&]uh1jhj @ubeh}(h]h ]h"]h$]h&]uh1jkhj'@hMOhj @ubjl)}(h+``pool`` worker_pool to iterate workers of h](jr)}(h``pool``h]j)}(hjK@h]hpool}(hjM@hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjI@ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMPhjE@ubj)}(hhh]h)}(h!worker_pool to iterate workers ofh]h!worker_pool to iterate workers of}(hjd@hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj`@hMPhja@ubah}(h]h ]h"]h$]h&]uh1jhjE@ubeh}(h]h ]h"]h$]h&]uh1jkhj`@hMPhj @ubeh}(h]h ]h"]h$]h&]uh1jfhj?ubh)}(h**Description**h]j)}(hj@h]h Description}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj@ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMRhj?ubh)}(h.This must be called with wq_pool_attach_mutex.h]h.This must be called with wq_pool_attach_mutex.}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMQhj?ubh)}(hLThe if/else clause exists only for the lockdep assertion and can be ignored.h]hLThe if/else clause exists only for the lockdep assertion and can be ignored.}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMShj?ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jfor_each_pwq (C 
macro)c.for_each_pwqhNtauh1jhjhhhNhNubj)}(hhh](j)}(h for_each_pwqh]j)}(h for_each_pwqh]j)}(h for_each_pwqh]j)}(hj@h]h for_each_pwq}(hj@hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj@ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj@hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM]ubah}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj@hhhj@hM]ubah}(h]j@ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj@hM]hj@hhubjC)}(hhh]h}(h]h ]h"]h$]h&]uh1jBhj@hhhj@hM]ubeh}(h]h ](j_macroeh"]h$]h&]jdj_jej Ajfj Ajgjhjiuh1jhhhjhNhNubh)}(h``for_each_pwq (pwq, wq)``h]j)}(hjAh]hfor_each_pwq (pwq, wq)}(hjAhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjAubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM_hjhhubj)}(h?iterate through all pool_workqueues of the specified workqueue h]h)}(h>iterate through all pool_workqueues of the specified workqueueh]h>iterate through all pool_workqueues of the specified workqueue}(hj*AhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM]hj&Aubah}(h]h ]h"]h$]h&]uh1jhj8AhM]hjhhubjk)}(hXl**Parameters** ``pwq`` iteration cursor ``wq`` the target workqueue **Description** This must be called either with wq->mutex held or RCU read locked. If the pwq needs to be used beyond the locking in effect, the caller is responsible for guaranteeing that the pwq stays online. 
The if/else clause exists only for the lockdep assertion and can be ignored.h](h)}(h**Parameters**h]j)}(hjEAh]h Parameters}(hjGAhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjCAubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMahj?Aubjg)}(hhh](jl)}(h``pwq`` iteration cursor h](jr)}(h``pwq``h]j)}(hjdAh]hpwq}(hjfAhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjbAubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM^hj^Aubj)}(hhh]h)}(hiteration cursorh]hiteration cursor}(hj}AhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjyAhM^hjzAubah}(h]h ]h"]h$]h&]uh1jhj^Aubeh}(h]h ]h"]h$]h&]uh1jkhjyAhM^hj[Aubjl)}(h``wq`` the target workqueue h](jr)}(h``wq``h]j)}(hjAh]hwq}(hjAhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjAubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM_hjAubj)}(hhh]h)}(hthe target workqueueh]hthe target workqueue}(hjAhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjAhM_hjAubah}(h]h ]h"]h$]h&]uh1jhjAubeh}(h]h ]h"]h$]h&]uh1jkhjAhM_hj[Aubeh}(h]h ]h"]h$]h&]uh1jfhj?Aubh)}(h**Description**h]j)}(hjAh]h Description}(hjAhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjAubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMahj?Aubh)}(hThis must be called either with wq->mutex held or RCU read locked. If the pwq needs to be used beyond the locking in effect, the caller is responsible for guaranteeing that the pwq stays online.h]hThis must be called either with wq->mutex held or RCU read locked. 
If the pwq needs to be used beyond the locking in effect, the caller is responsible for guaranteeing that the pwq stays online.}(hjAhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM`hj?Aubh)}(hLThe if/else clause exists only for the lockdep assertion and can be ignored.h]hLThe if/else clause exists only for the lockdep assertion and can be ignored.}(hjAhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMdhj?Aubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j"worker_pool_assign_id (C function)c.worker_pool_assign_idhNtauh1jhjhhhNhNubj)}(hhh](j)}(h4int worker_pool_assign_id (struct worker_pool *pool)h]j)}(h3int worker_pool_assign_id(struct worker_pool *pool)h](j)}(hinth]hint}(hj,BhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj(BhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hj;BhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj(Bhhhj:BhMubj)}(hworker_pool_assign_idh]j)}(hworker_pool_assign_idh]hworker_pool_assign_id}(hjMBhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjIBubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj(Bhhhj:BhMubj|)}(h(struct worker_pool *pool)h]j)}(hstruct worker_pool *poolh](j)}(hjh]hstruct}(hjiBhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjeBubj)}(h h]h }(hjvBhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjeBubh)}(hhh]j)}(h worker_poolh]h worker_pool}(hjBhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjBubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjBmodnameN classnameNj7j:)}j=]j@)}j3jOBsbc.worker_pool_assign_idasbuh1hhjeBubj)}(h h]h }(hjBhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjeBubjU)}(hjuh]h*}(hjBhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjeBubj)}(hpoolh]hpool}(hjBhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjeBubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjaBubah}(h]h ]h"]h$]h&]jjuh1j{hj(Bhhhj:BhMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj$Bhhhj:BhMubah}(h]jBah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj:BhMhj!BhhubjC)}(hhh]h)}(h%allocate ID and assign it to 
**pool**h](hallocate ID and assign it to }(hjBhhhNhNubj)}(h**pool**h]hpool}(hjBhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjBubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjBhhubah}(h]h ]h"]h$]h&]uh1jBhj!Bhhhj:BhMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejCjfjCjgjhjiuh1jhhhjhNhNubjk)}(h**Parameters** ``struct worker_pool *pool`` the pool pointer of interest **Description** Returns 0 if ID in [0, WORK_OFFQ_POOL_NONE) is allocated and assigned successfully, -errno on failure.h](h)}(h**Parameters**h]j)}(hjCh]h Parameters}(hjChhhNhNubah}(h]h ]h"]h$]h&]uh1jhjCubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjCubjg)}(hhh]jl)}(h:``struct worker_pool *pool`` the pool pointer of interest h](jr)}(h``struct worker_pool *pool``h]j)}(hj;Ch]hstruct worker_pool *pool}(hj=ChhhNhNubah}(h]h ]h"]h$]h&]uh1jhj9Cubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj5Cubj)}(hhh]h)}(hthe pool pointer of interesth]hthe pool pointer of interest}(hjTChhhNhNubah}(h]h ]h"]h$]h&]uh1hhjPChMhjQCubah}(h]h ]h"]h$]h&]uh1jhj5Cubeh}(h]h ]h"]h$]h&]uh1jkhjPChMhj2Cubah}(h]h ]h"]h$]h&]uh1jfhjCubh)}(h**Description**h]j)}(hjvCh]h Description}(hjxChhhNhNubah}(h]h ]h"]h$]h&]uh1jhjtCubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjCubh)}(hfReturns 0 if ID in [0, WORK_OFFQ_POOL_NONE) is allocated and assigned successfully, -errno on failure.h]hfReturns 0 if ID in [0, WORK_OFFQ_POOL_NONE) is allocated and assigned successfully, -errno on failure.}(hjChhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjCubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j&unbound_effective_cpumask (C function)c.unbound_effective_cpumaskhNtauh1jhjhhhNhNubj)}(hhh](j)}(hHstruct cpumask * 
unbound_effective_cpumask (struct workqueue_struct *wq)h]j)}(hFstruct cpumask *unbound_effective_cpumask(struct workqueue_struct *wq)h](j)}(hjh]hstruct}(hjChhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjChhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjChhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjChhhjChMubh)}(hhh]j)}(hcpumaskh]hcpumask}(hjChhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjCubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjCmodnameN classnameNj7j:)}j=]j@)}j3unbound_effective_cpumasksbc.unbound_effective_cpumaskasbuh1hhjChhhjChMubj)}(h h]h }(hjChhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjChhhjChMubjU)}(hjuh]h*}(hj DhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjChhhjChMubj)}(hunbound_effective_cpumaskh]j)}(hjCh]hunbound_effective_cpumask}(hjDhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjDubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjChhhjChMubj|)}(h(struct workqueue_struct *wq)h]j)}(hstruct workqueue_struct *wqh](j)}(hjh]hstruct}(hj5DhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj1Dubj)}(h h]h }(hjBDhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj1Dubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjSDhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjPDubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjUDmodnameN classnameNj7j:)}j=]jCc.unbound_effective_cpumaskasbuh1hhj1Dubj)}(h h]h }(hjqDhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj1DubjU)}(hjuh]h*}(hjDhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj1Dubj)}(hwqh]hwq}(hjDhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj1Dubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj-Dubah}(h]h ]h"]h$]h&]jjuh1j{hjChhhjChMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjChhhjChMubah}(h]jCah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjChMhjChhubjC)}(hhh]h)}(h)effective cpumask of an unbound workqueueh]h)effective cpumask of an unbound workqueue}(hjDhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjDhhubah}(h]h ]h"]h$]h&]uh1jBhjChhhjChMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejDjfjDjgjhjiuh1jhhhjhNhNubjk)}(hX@**Parameters** ``struct workqueue_struct *wq`` workqueue of interest 
**Description** **wq->unbound_attrs->cpumask** contains the cpumask requested by the user which is masked with wq_unbound_cpumask to determine the effective cpumask. The default pwq is always mapped to the pool with the current effective cpumask.h](h)}(h**Parameters**h]j)}(hjDh]h Parameters}(hjDhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjDubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjDubjg)}(hhh]jl)}(h6``struct workqueue_struct *wq`` workqueue of interest h](jr)}(h``struct workqueue_struct *wq``h]j)}(hjDh]hstruct workqueue_struct *wq}(hjDhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjDubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjDubj)}(hhh]h)}(hworkqueue of interesth]hworkqueue of interest}(hjEhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj EhMhj Eubah}(h]h ]h"]h$]h&]uh1jhjDubeh}(h]h ]h"]h$]h&]uh1jkhj EhMhjDubah}(h]h ]h"]h$]h&]uh1jfhjDubh)}(h**Description**h]j)}(hj2Eh]h Description}(hj4EhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj0Eubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjDubh)}(h**wq->unbound_attrs->cpumask** contains the cpumask requested by the user which is masked with wq_unbound_cpumask to determine the effective cpumask. The default pwq is always mapped to the pool with the current effective cpumask.h](j)}(h**wq->unbound_attrs->cpumask**h]hwq->unbound_attrs->cpumask}(hjLEhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjHEubh contains the cpumask requested by the user which is masked with wq_unbound_cpumask to determine the effective cpumask. 
The default pwq is always mapped to the pool with the current effective cpumask.}(hjHEhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjDubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jget_work_pool (C function)c.get_work_poolhNtauh1jhjhhhNhNubj)}(hhh](j)}(h=struct worker_pool * get_work_pool (struct work_struct *work)h]j)}(h;struct worker_pool *get_work_pool(struct work_struct *work)h](j)}(hjh]hstruct}(hjEhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjEhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMsubj)}(h h]h }(hjEhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjEhhhjEhMsubh)}(hhh]j)}(h worker_poolh]h worker_pool}(hjEhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjEubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjEmodnameN classnameNj7j:)}j=]j@)}j3 get_work_poolsbc.get_work_poolasbuh1hhjEhhhjEhMsubj)}(h h]h }(hjEhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjEhhhjEhMsubjU)}(hjuh]h*}(hjEhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjEhhhjEhMsubj)}(h get_work_poolh]j)}(hjEh]h get_work_pool}(hjEhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjEubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjEhhhjEhMsubj|)}(h(struct work_struct *work)h]j)}(hstruct work_struct *workh](j)}(hjh]hstruct}(hjEhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjEubj)}(h h]h }(hj FhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjEubh)}(hhh]j)}(h work_structh]h work_struct}(hjFhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjFubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjFmodnameN classnameNj7j:)}j=]jEc.get_work_poolasbuh1hhjEubj)}(h h]h }(hj;FhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjEubjU)}(hjuh]h*}(hjIFhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjEubj)}(hworkh]hwork}(hjVFhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjEubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjEubah}(h]h ]h"]h$]h&]jjuh1j{hjEhhhjEhMsubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj}EhhhjEhMsubah}(h]jxEah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjEhMshjzEhhubjC)}(hhh]h)}(h7return the worker_pool a given work was associated withh]h7return the worker_pool a given 
work was associated with}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMshj}Fhhubah}(h]h ]h"]h$]h&]uh1jBhjzEhhhjEhMsubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejFjfjFjgjhjiuh1jhhhjhNhNubjk)}(hXi**Parameters** ``struct work_struct *work`` the work item of interest **Description** Pools are created and destroyed under wq_pool_mutex, and allows read access under RCU read lock. As such, this function should be called under wq_pool_mutex or inside of a rcu_read_lock() region. All fields of the returned pool are accessible as long as the above mentioned locking is in effect. If the returned pool needs to be used beyond the critical section, the caller is responsible for ensuring the returned pool is and stays online. **Return** The worker_pool **work** was last associated with. ``NULL`` if none.h](h)}(h**Parameters**h]j)}(hjFh]h Parameters}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjFubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMwhjFubjg)}(hhh]jl)}(h7``struct work_struct *work`` the work item of interest h](jr)}(h``struct work_struct *work``h]j)}(hjFh]hstruct work_struct *work}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjFubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMthjFubj)}(hhh]h)}(hthe work item of interesth]hthe work item of interest}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjFhMthjFubah}(h]h ]h"]h$]h&]uh1jhjFubeh}(h]h ]h"]h$]h&]uh1jkhjFhMthjFubah}(h]h ]h"]h$]h&]uh1jfhjFubh)}(h**Description**h]j)}(hjFh]h Description}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjFubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMvhjFubh)}(hPools are created and destroyed under wq_pool_mutex, and allows read access under RCU read lock. 
As such, this function should be called under wq_pool_mutex or inside of a rcu_read_lock() region.h]hPools are created and destroyed under wq_pool_mutex, and allows read access under RCU read lock. As such, this function should be called under wq_pool_mutex or inside of a rcu_read_lock() region.}(hjGhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMuhjFubh)}(hAll fields of the returned pool are accessible as long as the above mentioned locking is in effect. If the returned pool needs to be used beyond the critical section, the caller is responsible for ensuring the returned pool is and stays online.h]hAll fields of the returned pool are accessible as long as the above mentioned locking is in effect. If the returned pool needs to be used beyond the critical section, the caller is responsible for ensuring the returned pool is and stays online.}(hj!GhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMyhjFubh)}(h **Return**h]j)}(hj2Gh]hReturn}(hj4GhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj0Gubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM~hjFubh)}(hEThe worker_pool **work** was last associated with. ``NULL`` if none.h](hThe worker_pool }(hjHGhhhNhNubj)}(h**work**h]hwork}(hjPGhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjHGubh was last associated with. 
}(hjHGhhhNhNubj)}(h``NULL``h]hNULL}(hjbGhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjHGubh if none.}(hjHGhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjFubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jworker_set_flags (C function)c.worker_set_flagshNtauh1jhjhhhNhNubj)}(hhh](j)}(hAvoid worker_set_flags (struct worker *worker, unsigned int flags)h]j)}(h@void worker_set_flags(struct worker *worker, unsigned int flags)h](j)}(hvoidh]hvoid}(hjGhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjGhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjGhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjGhhhjGhMubj)}(hworker_set_flagsh]j)}(hworker_set_flagsh]hworker_set_flags}(hjGhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjGubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjGhhhjGhMubj|)}(h+(struct worker *worker, unsigned int flags)h](j)}(hstruct worker *workerh](j)}(hjh]hstruct}(hjGhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjGubj)}(h h]h }(hjGhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjGubh)}(hhh]j)}(hworkerh]hworker}(hjGhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjGubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjGmodnameN classnameNj7j:)}j=]j@)}j3jGsbc.worker_set_flagsasbuh1hhjGubj)}(h h]h }(hjHhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjGubjU)}(hjuh]h*}(hj$HhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjGubj)}(hworkerh]hworker}(hj1HhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjGubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjGubj)}(hunsigned int flagsh](j)}(hunsignedh]hunsigned}(hjJHhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjFHubj)}(h h]h }(hjXHhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjFHubj)}(hinth]hint}(hjfHhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjFHubj)}(h h]h }(hjtHhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjFHubj)}(hflagsh]hflags}(hjHhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjFHubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjGubeh}(h]h ]h"]h$]h&]jjuh1j{hjGhhhjGhMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjGhhhjGhMubah}(h]jGah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjGhMhjGhhubjC)}(hhh]h)}(h2set worker flags and adjust 
nr_running accordinglyh]h2set worker flags and adjust nr_running accordingly}(hjHhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjHhhubah}(h]h ]h"]h$]h&]uh1jBhjGhhhjGhMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejHjfjHjgjhjiuh1jhhhjhNhNubjk)}(h**Parameters** ``struct worker *worker`` self ``unsigned int flags`` flags to set **Description** Set **flags** in **worker->flags** and adjust nr_running accordingly.h](h)}(h**Parameters**h]j)}(hjHh]h Parameters}(hjHhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjHubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjHubjg)}(hhh](jl)}(h``struct worker *worker`` self h](jr)}(h``struct worker *worker``h]j)}(hjHh]hstruct worker *worker}(hjHhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjHubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjHubj)}(hhh]h)}(hselfh]hself}(hjIhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjIhMhjIubah}(h]h ]h"]h$]h&]uh1jhjHubeh}(h]h ]h"]h$]h&]uh1jkhjIhMhjHubjl)}(h$``unsigned int flags`` flags to set h](jr)}(h``unsigned int flags``h]j)}(hj&Ih]hunsigned int flags}(hj(IhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj$Iubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj Iubj)}(hhh]h)}(h flags to seth]h flags to set}(hj?IhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj;IhMhjflags** and adjust nr_running accordingly.h](hSet }(hjwIhhhNhNubj)}(h **flags**h]hflags}(hjIhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjwIubh in }(hjwIhhhNhNubj)}(h**worker->flags**h]h worker->flags}(hjIhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjwIubh# and adjust nr_running accordingly.}(hjwIhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjHubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jworker_clr_flags (C function)c.worker_clr_flagshNtauh1jhjhhhNhNubj)}(hhh](j)}(hAvoid 
worker_clr_flags (struct worker *worker, unsigned int flags)h]j)}(h@void worker_clr_flags(struct worker *worker, unsigned int flags)h](j)}(hvoidh]hvoid}(hjIhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjIhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjIhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjIhhhjIhMubj)}(hworker_clr_flagsh]j)}(hworker_clr_flagsh]hworker_clr_flags}(hjIhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjIubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjIhhhjIhMubj|)}(h+(struct worker *worker, unsigned int flags)h](j)}(hstruct worker *workerh](j)}(hjh]hstruct}(hjJhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjJubj)}(h h]h }(hjJhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjJubh)}(hhh]j)}(hworkerh]hworker}(hj%JhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj"Jubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj'JmodnameN classnameNj7j:)}j=]j@)}j3jIsbc.worker_clr_flagsasbuh1hhjJubj)}(h h]h }(hjEJhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjJubjU)}(hjuh]h*}(hjSJhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjJubj)}(hworkerh]hworker}(hj`JhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjJubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjIubj)}(hunsigned int flagsh](j)}(hunsignedh]hunsigned}(hjyJhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjuJubj)}(h h]h }(hjJhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjuJubj)}(hinth]hint}(hjJhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjuJubj)}(h h]h }(hjJhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjuJubj)}(hflagsh]hflags}(hjJhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjuJubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjIubeh}(h]h ]h"]h$]h&]jjuh1j{hjIhhhjIhMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjIhhhjIhMubah}(h]jIah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjIhMhjIhhubjC)}(hhh]h)}(h4clear worker flags and adjust nr_running accordinglyh]h4clear worker flags and adjust nr_running accordingly}(hjJhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjJhhubah}(h]h ]h"]h$]h&]uh1jBhjIhhhjIhMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejJjfjJjgjhjiuh1jhhhjhNhNubjk)}(h**Parameters** ``struct worker *worker`` self ``unsigned int 
flags`` flags to clear **Description** Clear **flags** in **worker->flags** and adjust nr_running accordingly.h](h)}(h**Parameters**h]j)}(hjJh]h Parameters}(hjJhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjJubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjJubjg)}(hhh](jl)}(h``struct worker *worker`` self h](jr)}(h``struct worker *worker``h]j)}(hjKh]hstruct worker *worker}(hjKhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjKubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjKubj)}(hhh]h)}(hselfh]hself}(hj5KhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj1KhMhj2Kubah}(h]h ]h"]h$]h&]uh1jhjKubeh}(h]h ]h"]h$]h&]uh1jkhj1KhMhjKubjl)}(h&``unsigned int flags`` flags to clear h](jr)}(h``unsigned int flags``h]j)}(hjUKh]hunsigned int flags}(hjWKhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjSKubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjOKubj)}(hhh]h)}(hflags to clearh]hflags to clear}(hjnKhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjjKhMhjkKubah}(h]h ]h"]h$]h&]uh1jhjOKubeh}(h]h ]h"]h$]h&]uh1jkhjjKhMhjKubeh}(h]h ]h"]h$]h&]uh1jfhjJubh)}(h**Description**h]j)}(hjKh]h Description}(hjKhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjKubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjJubh)}(hGClear **flags** in **worker->flags** and adjust nr_running accordingly.h](hClear }(hjKhhhNhNubj)}(h **flags**h]hflags}(hjKhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjKubh in }(hjKhhhNhNubj)}(h**worker->flags**h]h worker->flags}(hjKhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjKubh# and adjust nr_running accordingly.}(hjKhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjJubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jworker_enter_idle (C function)c.worker_enter_idlehNtauh1jhjhhhNhNubj)}(hhh](j)}(h.void worker_enter_idle (struct worker 
*worker)h]j)}(h-void worker_enter_idle(struct worker *worker)h](j)}(hvoidh]hvoid}(hjKhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjKhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjLhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjKhhhjLhMubj)}(hworker_enter_idleh]j)}(hworker_enter_idleh]hworker_enter_idle}(hjLhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjLubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjKhhhjLhMubj|)}(h(struct worker *worker)h]j)}(hstruct worker *workerh](j)}(hjh]hstruct}(hj6LhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj2Lubj)}(h h]h }(hjCLhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj2Lubh)}(hhh]j)}(hworkerh]hworker}(hjTLhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjQLubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjVLmodnameN classnameNj7j:)}j=]j@)}j3jLsbc.worker_enter_idleasbuh1hhj2Lubj)}(h h]h }(hjtLhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj2LubjU)}(hjuh]h*}(hjLhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj2Lubj)}(hworkerh]hworker}(hjLhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj2Lubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj.Lubah}(h]h ]h"]h$]h&]jjuh1j{hjKhhhjLhMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjKhhhjLhMubah}(h]jKah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjLhMhjKhhubjC)}(hhh]h)}(henter idle stateh]henter idle state}(hjLhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjLhhubah}(h]h ]h"]h$]h&]uh1jBhjKhhhjLhMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejLjfjLjgjhjiuh1jhhhjhNhNubjk)}(h**Parameters** ``struct worker *worker`` worker which is entering idle state **Description** **worker** is entering idle state. Update stats and idle timer if necessary. 
LOCKING: raw_spin_lock_irq(pool->lock).h](h)}(h**Parameters**h]j)}(hjLh]h Parameters}(hjLhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjLubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjLubjg)}(hhh]jl)}(h>``struct worker *worker`` worker which is entering idle state h](jr)}(h``struct worker *worker``h]j)}(hjLh]hstruct worker *worker}(hjLhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjLubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjLubj)}(hhh]h)}(h#worker which is entering idle stateh]h#worker which is entering idle state}(hjMhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjMhMhjMubah}(h]h ]h"]h$]h&]uh1jhjLubeh}(h]h ]h"]h$]h&]uh1jkhjMhMhjLubah}(h]h ]h"]h$]h&]uh1jfhjLubh)}(h**Description**h]j)}(hj5Mh]h Description}(hj7MhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj3Mubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjLubh)}(hM**worker** is entering idle state. Update stats and idle timer if necessary.h](j)}(h **worker**h]hworker}(hjOMhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjKMubhC is entering idle state. 
Update stats and idle timer if necessary.}(hjKMhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjLubh)}(h'LOCKING: raw_spin_lock_irq(pool->lock).h]h'LOCKING: raw_spin_lock_irq(pool->lock).}(hjhMhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjLubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jworker_leave_idle (C function)c.worker_leave_idlehNtauh1jhjhhhNhNubj)}(hhh](j)}(h.void worker_leave_idle (struct worker *worker)h]j)}(h-void worker_leave_idle(struct worker *worker)h](j)}(hvoidh]hvoid}(hjMhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjMhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM3ubj)}(h h]h }(hjMhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjMhhhjMhM3ubj)}(hworker_leave_idleh]j)}(hworker_leave_idleh]hworker_leave_idle}(hjMhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjMubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjMhhhjMhM3ubj|)}(h(struct worker *worker)h]j)}(hstruct worker *workerh](j)}(hjh]hstruct}(hjMhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjMubj)}(h h]h }(hjMhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjMubh)}(hhh]j)}(hworkerh]hworker}(hjMhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjMubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjMmodnameN classnameNj7j:)}j=]j@)}j3jMsbc.worker_leave_idleasbuh1hhjMubj)}(h h]h }(hjNhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjMubjU)}(hjuh]h*}(hj NhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjMubj)}(hworkerh]hworker}(hj-NhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjMubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjMubah}(h]h ]h"]h$]h&]jjuh1j{hjMhhhjMhM3ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjMhhhjMhM3ubah}(h]jMah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjMhM3hjMhhubjC)}(hhh]h)}(hleave idle stateh]hleave idle state}(hjWNhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM3hjTNhhubah}(h]h ]h"]h$]h&]uh1jBhjMhhhjMhM3ubeh}(h]h 
](j_functioneh"]h$]h&]jdj_jejoNjfjoNjgjhjiuh1jhhhjhNhNubjk)}(h**Parameters** ``struct worker *worker`` worker which is leaving idle state **Description** **worker** is leaving idle state. Update stats. LOCKING: raw_spin_lock_irq(pool->lock).h](h)}(h**Parameters**h]j)}(hjyNh]h Parameters}(hj{NhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjwNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM7hjsNubjg)}(hhh]jl)}(h=``struct worker *worker`` worker which is leaving idle state h](jr)}(h``struct worker *worker``h]j)}(hjNh]hstruct worker *worker}(hjNhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjNubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM4hjNubj)}(hhh]h)}(h"worker which is leaving idle stateh]h"worker which is leaving idle state}(hjNhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjNhM4hjNubah}(h]h ]h"]h$]h&]uh1jhjNubeh}(h]h ]h"]h$]h&]uh1jkhjNhM4hjNubah}(h]h ]h"]h$]h&]uh1jfhjsNubh)}(h**Description**h]j)}(hjNh]h Description}(hjNhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM6hjsNubh)}(h0**worker** is leaving idle state. Update stats.h](j)}(h **worker**h]hworker}(hjNhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjNubh& is leaving idle state. 
Update stats.}(hjNhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM5hjsNubh)}(h'LOCKING: raw_spin_lock_irq(pool->lock).h]h'LOCKING: raw_spin_lock_irq(pool->lock).}(hjOhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM7hjsNubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j'find_worker_executing_work (C function)c.find_worker_executing_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(h_struct worker * find_worker_executing_work (struct worker_pool *pool, struct work_struct *work)h]j)}(h]struct worker *find_worker_executing_work(struct worker_pool *pool, struct work_struct *work)h](j)}(hjh]hstruct}(hj5OhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj1OhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMGubj)}(h h]h }(hjCOhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj1OhhhjBOhMGubh)}(hhh]j)}(hworkerh]hworker}(hjTOhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjQOubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjVOmodnameN classnameNj7j:)}j=]j@)}j3find_worker_executing_worksbc.find_worker_executing_workasbuh1hhj1OhhhjBOhMGubj)}(h h]h }(hjuOhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj1OhhhjBOhMGubjU)}(hjuh]h*}(hjOhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj1OhhhjBOhMGubj)}(hfind_worker_executing_workh]j)}(hjrOh]hfind_worker_executing_work}(hjOhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjOubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj1OhhhjBOhMGubj|)}(h4(struct worker_pool *pool, struct work_struct *work)h](j)}(hstruct worker_pool *poolh](j)}(hjh]hstruct}(hjOhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjOubj)}(h h]h }(hjOhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjOubh)}(hhh]j)}(h worker_poolh]h worker_pool}(hjOhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjOubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjOmodnameN classnameNj7j:)}j=]jpOc.find_worker_executing_workasbuh1hhjOubj)}(h h]h }(hjOhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjOubjU)}(hjuh]h*}(hjOhhhNhNubah}(h]h 
]j`ah"]h$]h&]uh1jThjOubj)}(hpoolh]hpool}(hjPhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjOubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjOubj)}(hstruct work_struct *workh](j)}(hjh]hstruct}(hjPhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjPubj)}(h h]h }(hj,PhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjPubh)}(hhh]j)}(h work_structh]h work_struct}(hj=PhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj:Pubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj?PmodnameN classnameNj7j:)}j=]jpOc.find_worker_executing_workasbuh1hhjPubj)}(h h]h }(hj[PhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjPubjU)}(hjuh]h*}(hjiPhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjPubj)}(hworkh]hwork}(hjvPhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjPubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjOubeh}(h]h ]h"]h$]h&]jjuh1j{hj1OhhhjBOhMGubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj-OhhhjBOhMGubah}(h]j(Oah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjBOhMGhj*OhhubjC)}(hhh]h)}(h%find worker which is executing a workh]h%find worker which is executing a work}(hjPhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMGhjPhhubah}(h]h ]h"]h$]h&]uh1jBhj*OhhhjBOhMGubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejPjfjPjgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``struct worker_pool *pool`` pool of interest ``struct work_struct *work`` work to find worker for **Description** Find a worker which is executing **work** on **pool** by searching **pool->busy_hash** which is keyed by the address of **work**. For a worker to match, its current execution should match the address of **work** and its work function. This is to avoid unwanted dependency between unrelated work executions through a work item being recycled while still being executed. This is a bit tricky. A work item may be freed once its execution starts and nothing prevents the freed area from being recycled for another work item. 
If the same work item address ends up being reused before the original execution finishes, workqueue will identify the recycled work item as currently executing and make it wait until the current execution finishes, introducing an unwanted dependency. This function checks the work item address and work function to avoid false positives. Note that this isn't complete as one may construct a work function which can introduce dependency onto itself through a recycled work item. Well, if somebody wants to shoot oneself in the foot that badly, there's only so much we can do, and if such deadlock actually occurs, it should be easy to locate the culprit work function. **Context** raw_spin_lock_irq(pool->lock). **Return** Pointer to worker which is executing **work** if found, ``NULL`` otherwise.h](h)}(h**Parameters**h]j)}(hjPh]h Parameters}(hjPhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjPubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMKhjPubjg)}(hhh](jl)}(h.``struct worker_pool *pool`` pool of interest h](jr)}(h``struct worker_pool *pool``h]j)}(hjPh]hstruct worker_pool *pool}(hjPhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjPubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMHhjPubj)}(hhh]h)}(hpool of interesth]hpool of interest}(hjPhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjPhMHhjPubah}(h]h ]h"]h$]h&]uh1jhjPubeh}(h]h ]h"]h$]h&]uh1jkhjPhMHhjPubjl)}(h5``struct work_struct *work`` work to find worker for h](jr)}(h``struct work_struct *work``h]j)}(hjQh]hstruct work_struct *work}(hjQhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjQubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMIhjQubj)}(hhh]h)}(hwork to find worker forh]hwork to find worker for}(hj3QhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj/QhMIhj0Qubah}(h]h ]h"]h$]h&]uh1jhjQubeh}(h]h ]h"]h$]h&]uh1jkhj/QhMIhjPubeh}(h]h ]h"]h$]h&]uh1jfhjPubh)}(h**Description**h]j)}(hjUQh]h 
Description}(hjWQhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjSQubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMKhjPubh)}(hXrFind a worker which is executing **work** on **pool** by searching **pool->busy_hash** which is keyed by the address of **work**. For a worker to match, its current execution should match the address of **work** and its work function. This is to avoid unwanted dependency between unrelated work executions through a work item being recycled while still being executed.h](h!Find a worker which is executing }(hjkQhhhNhNubj)}(h**work**h]hwork}(hjsQhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjkQubh on }(hjkQhhhNhNubj)}(h**pool**h]hpool}(hjQhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjkQubh by searching }(hjkQhhhNhNubj)}(h**pool->busy_hash**h]hpool->busy_hash}(hjQhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjkQubh" which is keyed by the address of }(hjkQhhhNhNubj)}(h**work**h]hwork}(hjQhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjkQubhL. For a worker to match, its current execution should match the address of }(hjkQhhhNhNubj)}(h**work**h]hwork}(hjQhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjkQubh and its work function. This is to avoid unwanted dependency between unrelated work executions through a work item being recycled while still being executed.}(hjkQhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMJhjPubh)}(hXThis is a bit tricky. A work item may be freed once its execution starts and nothing prevents the freed area from being recycled for another work item. If the same work item address ends up being reused before the original execution finishes, workqueue will identify the recycled work item as currently executing and make it wait until the current execution finishes, introducing an unwanted dependency.h]hXThis is a bit tricky. A work item may be freed once its execution starts and nothing prevents the freed area from being recycled for another work item. 
If the same work item address ends up being reused before the original execution finishes, workqueue will identify the recycled work item as currently executing and make it wait until the current execution finishes, introducing an unwanted dependency.}(hjQhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMQhjPubh)}(hXThis function checks the work item address and work function to avoid false positives. Note that this isn't complete as one may construct a work function which can introduce dependency onto itself through a recycled work item. Well, if somebody wants to shoot oneself in the foot that badly, there's only so much we can do, and if such deadlock actually occurs, it should be easy to locate the culprit work function.h]hXThis function checks the work item address and work function to avoid false positives. Note that this isn’t complete as one may construct a work function which can introduce dependency onto itself through a recycled work item. 
Well, if somebody wants to shoot oneself in the foot that badly, there’s only so much we can do, and if such deadlock actually occurs, it should be easy to locate the culprit work function.}(hjQhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMXhjPubh)}(h **Context**h]j)}(hjQh]hContext}(hjQhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjQubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM_hjPubh)}(hraw_spin_lock_irq(pool->lock).h]hraw_spin_lock_irq(pool->lock).}(hj RhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM`hjPubh)}(h **Return**h]j)}(hjRh]hReturn}(hjRhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjRubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMbhjPubh)}(hKPointer to worker which is executing **work** if found, ``NULL`` otherwise.h](h%Pointer to worker which is executing }(hj1RhhhNhNubj)}(h**work**h]hwork}(hj9RhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj1Rubh if found, }(hj1RhhhNhNubj)}(h``NULL``h]hNULL}(hjKRhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj1Rubh otherwise.}(hj1RhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMchjPubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jmove_linked_works (C function)c.move_linked_workshNtauh1jhjhhhNhNubj)}(hhh](j)}(hevoid move_linked_works (struct work_struct *work, struct list_head *head, struct work_struct **nextp)h]j)}(hdvoid move_linked_works(struct work_struct *work, struct list_head *head, struct work_struct **nextp)h](j)}(hvoidh]hvoid}(hjRhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjRhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM|ubj)}(h h]h }(hjRhhhNhNubah}(h]h ]j 
ah"]h$]h&]uh1jhjRhhhjRhM|ubj)}(hmove_linked_worksh]j)}(hmove_linked_worksh]hmove_linked_works}(hjRhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjRubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjRhhhjRhM|ubj|)}(hN(struct work_struct *work, struct list_head *head, struct work_struct **nextp)h](j)}(hstruct work_struct *workh](j)}(hjh]hstruct}(hjRhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjRubj)}(h h]h }(hjRhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjRubh)}(hhh]j)}(h work_structh]h work_struct}(hjRhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjRubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjRmodnameN classnameNj7j:)}j=]j@)}j3jRsbc.move_linked_worksasbuh1hhjRubj)}(h h]h }(hjRhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjRubjU)}(hjuh]h*}(hj ShhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjRubj)}(hworkh]hwork}(hjShhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjRubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjRubj)}(hstruct list_head *headh](j)}(hjh]hstruct}(hj3ShhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj/Subj)}(h h]h }(hj@ShhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj/Subh)}(hhh]j)}(h list_headh]h list_head}(hjQShhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjNSubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjSSmodnameN classnameNj7j:)}j=]jRc.move_linked_worksasbuh1hhj/Subj)}(h h]h }(hjoShhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj/SubjU)}(hjuh]h*}(hj}ShhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj/Subj)}(hheadh]hhead}(hjShhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj/Subeh}(h]h ]h"]h$]h&]noemphjjuh1jhjRubj)}(hstruct work_struct **nextph](j)}(hjh]hstruct}(hjShhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjSubj)}(h h]h }(hjShhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjSubh)}(hhh]j)}(h work_structh]h work_struct}(hjShhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjSubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjSmodnameN classnameNj7j:)}j=]jRc.move_linked_worksasbuh1hhjSubj)}(h h]h }(hjShhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjSubjU)}(hjuh]h*}(hjShhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjSubjU)}(hjuh]h*}(hjShhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjSubj)}(hnextph]hnextp}(hjThhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjSubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjRubeh}(h]h 
]h"]h$]h&]jjuh1j{hjRhhhjRhM|ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj|RhhhjRhM|ubah}(h]jwRah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjRhM|hjyRhhubjC)}(hhh]h)}(hmove linked works to a listh]hmove linked works to a list}(hj1ThhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM|hj.Thhubah}(h]h ]h"]h$]h&]uh1jBhjyRhhhjRhM|ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejITjfjITjgjhjiuh1jhhhjhNhNubjk)}(hX **Parameters** ``struct work_struct *work`` start of series of works to be scheduled ``struct list_head *head`` target list to append **work** to ``struct work_struct **nextp`` out parameter for nested worklist walking **Description** Schedule linked works starting from **work** to **head**. Work series to be scheduled starts at **work** and includes any consecutive work with WORK_STRUCT_LINKED set in its predecessor. See assign_work() for details on **nextp**. **Context** raw_spin_lock_irq(pool->lock).h](h)}(h**Parameters**h]j)}(hjSTh]h Parameters}(hjUThhhNhNubah}(h]h ]h"]h$]h&]uh1jhjQTubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjMTubjg)}(hhh](jl)}(hF``struct work_struct *work`` start of series of works to be scheduled h](jr)}(h``struct work_struct *work``h]j)}(hjrTh]hstruct work_struct *work}(hjtThhhNhNubah}(h]h ]h"]h$]h&]uh1jhjpTubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM}hjlTubj)}(hhh]h)}(h(start of series of works to be scheduledh]h(start of series of works to be scheduled}(hjThhhNhNubah}(h]h ]h"]h$]h&]uh1hhjThM}hjTubah}(h]h ]h"]h$]h&]uh1jhjlTubeh}(h]h ]h"]h$]h&]uh1jkhjThM}hjiTubjl)}(h=``struct list_head *head`` target list to append **work** to h](jr)}(h``struct list_head *head``h]j)}(hjTh]hstruct list_head *head}(hjThhhNhNubah}(h]h ]h"]h$]h&]uh1jhjTubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: 
./kernel/workqueue.chM~hjTubj)}(hhh]h)}(h!target list to append **work** toh](htarget list to append }(hjThhhNhNubj)}(h**work**h]hwork}(hjThhhNhNubah}(h]h ]h"]h$]h&]uh1jhjTubh to}(hjThhhNhNubeh}(h]h ]h"]h$]h&]uh1hhjThM~hjTubah}(h]h ]h"]h$]h&]uh1jhjTubeh}(h]h ]h"]h$]h&]uh1jkhjThM~hjiTubjl)}(hI``struct work_struct **nextp`` out parameter for nested worklist walking h](jr)}(h``struct work_struct **nextp``h]j)}(hjTh]hstruct work_struct **nextp}(hjThhhNhNubah}(h]h ]h"]h$]h&]uh1jhjTubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjTubj)}(hhh]h)}(h)out parameter for nested worklist walkingh]h)out parameter for nested worklist walking}(hjUhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj UhMhj Uubah}(h]h ]h"]h$]h&]uh1jhjTubeh}(h]h ]h"]h$]h&]uh1jkhj UhMhjiTubeh}(h]h ]h"]h$]h&]uh1jfhjMTubh)}(h**Description**h]j)}(hj1Uh]h Description}(hj3UhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj/Uubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjMTubh)}(hSchedule linked works starting from **work** to **head**. Work series to be scheduled starts at **work** and includes any consecutive work with WORK_STRUCT_LINKED set in its predecessor. See assign_work() for details on **nextp**.h](h$Schedule linked works starting from }(hjGUhhhNhNubj)}(h**work**h]hwork}(hjOUhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjGUubh to }(hjGUhhhNhNubj)}(h**head**h]hhead}(hjaUhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjGUubh(. Work series to be scheduled starts at }(hjGUhhhNhNubj)}(h**work**h]hwork}(hjsUhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjGUubht and includes any consecutive work with WORK_STRUCT_LINKED set in its predecessor. 
See assign_work() for details on }(hjGUhhhNhNubj)}(h **nextp**h]hnextp}(hjUhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjGUubh.}(hjGUhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjMTubh)}(h **Context**h]j)}(hjUh]hContext}(hjUhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjUubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjMTubh)}(hraw_spin_lock_irq(pool->lock).h]hraw_spin_lock_irq(pool->lock).}(hjUhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjMTubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jassign_work (C function) c.assign_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(h^bool assign_work (struct work_struct *work, struct worker *worker, struct work_struct **nextp)h]j)}(h]bool assign_work(struct work_struct *work, struct worker *worker, struct work_struct **nextp)h](j)}(hj*h]hbool}(hjUhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjUhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjUhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjUhhhjUhMubj)}(h assign_workh]j)}(h assign_workh]h assign_work}(hjVhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjVubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjUhhhjUhMubj|)}(hM(struct work_struct *work, struct worker *worker, struct work_struct **nextp)h](j)}(hstruct work_struct *workh](j)}(hjh]hstruct}(hj!VhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjVubj)}(h h]h }(hj.VhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjVubh)}(hhh]j)}(h work_structh]h work_struct}(hj?VhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjXubj)}(hhh]h)}(h)out parameter for nested worklist walkingh]h)out parameter for nested worklist walking}(hj]XhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjYXhMhjZXubah}(h]h ]h"]h$]h&]uh1jhj>Xubeh}(h]h ]h"]h$]h&]uh1jkhjYXhMhjWubeh}(h]h ]h"]h$]h&]uh1jfhjWubh)}(h**Description**h]j)}(hjXh]h Description}(hjXhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj}Xubah}(h]h 
]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjWubh)}(hAssign **work** and its linked work items to **worker**. If **work** is already being executed by another worker in the same pool, it'll be punted there.h](hAssign }(hjXhhhNhNubj)}(h**work**h]hwork}(hjXhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjXubh and its linked work items to }(hjXhhhNhNubj)}(h **worker**h]hworker}(hjXhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjXubh. If }(hjXhhhNhNubj)}(h**work**h]hwork}(hjXhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjXubhW is already being executed by another worker in the same pool, it’ll be punted there.}(hjXhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjWubh)}(hIf **nextp** is not NULL, it's updated to point to the next work of the last scheduled work. This allows assign_work() to be nested inside list_for_each_entry_safe().h](hIf }(hjXhhhNhNubj)}(h **nextp**h]hnextp}(hjXhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjXubh is not NULL, it’s updated to point to the next work of the last scheduled work. This allows assign_work() to be nested inside list_for_each_entry_safe().}(hjXhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjWubh)}(hReturns ``true`` if **work** was successfully assigned to **worker**. ``false`` if **work** was punted to another worker already executing it.h](hReturns }(hjXhhhNhNubj)}(h``true``h]htrue}(hjYhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjXubh if }(hjXhhhNhNubj)}(h**work**h]hwork}(hjYhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjXubh was successfully assigned to }(hjXhhhNhNubj)}(h **worker**h]hworker}(hj'YhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjXubh. 
}(hjXhhhNhNubj)}(h ``false``h]hfalse}(hj9YhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjXubh if }hjXsbj)}(h**work**h]hwork}(hjKYhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjXubh3 was punted to another worker already executing it.}(hjXhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjWubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jkick_pool (C function) c.kick_poolhNtauh1jhjhhhNhNubj)}(hhh](j)}(h)bool kick_pool (struct worker_pool *pool)h]j)}(h(bool kick_pool(struct worker_pool *pool)h](j)}(hj*h]hbool}(hjYhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjYhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjYhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjYhhhjYhMubj)}(h kick_poolh]j)}(h kick_poolh]h kick_pool}(hjYhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjYubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjYhhhjYhMubj|)}(h(struct worker_pool *pool)h]j)}(hstruct worker_pool *poolh](j)}(hjh]hstruct}(hjYhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjYubj)}(h h]h }(hjYhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjYubh)}(hhh]j)}(h worker_poolh]h worker_pool}(hjYhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjYubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjYmodnameN classnameNj7j:)}j=]j@)}j3jYsb c.kick_poolasbuh1hhjYubj)}(h h]h }(hjYhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjYubjU)}(hjuh]h*}(hj ZhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjYubj)}(hpoolh]hpool}(hjZhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjYubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjYubah}(h]h ]h"]h$]h&]jjuh1j{hjYhhhjYhMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj|YhhhjYhMubah}(h]jwYah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjYhMhjyYhhubjC)}(hhh]h)}(h#wake up an idle worker if necessaryh]h#wake up an idle worker if necessary}(hjCZhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj@Zhhubah}(h]h ]h"]h$]h&]uh1jBhjyYhhhjYhMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej[Zjfj[Zjgjhjiuh1jhhhjhNhNubjk)}(h**Parameters** ``struct worker_pool *pool`` pool 
to kick **Description** **pool** may have pending work items. Wake up worker if necessary. Returns whether a worker was woken up.h](h)}(h**Parameters**h]j)}(hjeZh]h Parameters}(hjgZhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjcZubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj_Zubjg)}(hhh]jl)}(h*``struct worker_pool *pool`` pool to kick h](jr)}(h``struct worker_pool *pool``h]j)}(hjZh]hstruct worker_pool *pool}(hjZhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjZubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj~Zubj)}(hhh]h)}(h pool to kickh]h pool to kick}(hjZhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjZhMhjZubah}(h]h ]h"]h$]h&]uh1jhj~Zubeh}(h]h ]h"]h$]h&]uh1jkhjZhMhj{Zubah}(h]h ]h"]h$]h&]uh1jfhj_Zubh)}(h**Description**h]j)}(hjZh]h Description}(hjZhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjZubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj_Zubh)}(hi**pool** may have pending work items. Wake up worker if necessary. Returns whether a worker was woken up.h](j)}(h**pool**h]hpool}(hjZhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjZubha may have pending work items. Wake up worker if necessary. 
Returns whether a worker was woken up.}(hjZhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj_Zubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jwq_worker_running (C function)c.wq_worker_runninghNtauh1jhjhhhNhNubj)}(hhh](j)}(h1void wq_worker_running (struct task_struct *task)h]j)}(h0void wq_worker_running(struct task_struct *task)h](j)}(hvoidh]hvoid}(hj[hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj[hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hj![hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj[hhhj [hMubj)}(hwq_worker_runningh]j)}(hwq_worker_runningh]hwq_worker_running}(hj3[hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj/[ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj[hhhj [hMubj|)}(h(struct task_struct *task)h]j)}(hstruct task_struct *taskh](j)}(hjh]hstruct}(hjO[hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjK[ubj)}(h h]h }(hj\[hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjK[ubh)}(hhh]j)}(h task_structh]h task_struct}(hjm[hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjj[ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjo[modnameN classnameNj7j:)}j=]j@)}j3j5[sbc.wq_worker_runningasbuh1hhjK[ubj)}(h h]h }(hj[hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjK[ubjU)}(hjuh]h*}(hj[hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjK[ubj)}(htaskh]htask}(hj[hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjK[ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjG[ubah}(h]h ]h"]h$]h&]jjuh1j{hj[hhhj [hMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj [hhhj [hMubah}(h]j[ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj [hMhj[hhubjC)}(hhh]h)}(ha worker is running againh]ha worker is running again}(hj[hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj[hhubah}(h]h ]h"]h$]h&]uh1jBhj[hhhj [hMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej[jfj[jgjhjiuh1jhhhjhNhNubjk)}(h**Parameters** ``struct task_struct *task`` task waking up **Description** This function is called when a worker returns from 
schedule()h](h)}(h**Parameters**h]j)}(hj[h]h Parameters}(hj[hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj[ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj[ubjg)}(hhh]jl)}(h,``struct task_struct *task`` task waking up h](jr)}(h``struct task_struct *task``h]j)}(hj\h]hstruct task_struct *task}(hj\hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj\ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj \ubj)}(hhh]h)}(htask waking uph]htask waking up}(hj,\hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj(\hMhj)\ubah}(h]h ]h"]h$]h&]uh1jhj \ubeh}(h]h ]h"]h$]h&]uh1jkhj(\hMhj \ubah}(h]h ]h"]h$]h&]uh1jfhj[ubh)}(h**Description**h]j)}(hjN\h]h Description}(hjP\hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjL\ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj[ubh)}(h=This function is called when a worker returns from schedule()h]h=This function is called when a worker returns from schedule()}(hjd\hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj[ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jwq_worker_sleeping (C function)c.wq_worker_sleepinghNtauh1jhjhhhNhNubj)}(hhh](j)}(h2void wq_worker_sleeping (struct task_struct *task)h]j)}(h1void wq_worker_sleeping(struct task_struct *task)h](j)}(hvoidh]hvoid}(hj\hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj\hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hj\hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj\hhhj\hMubj)}(hwq_worker_sleepingh]j)}(hwq_worker_sleepingh]hwq_worker_sleeping}(hj\hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj\ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj\hhhj\hMubj|)}(h(struct task_struct *task)h]j)}(hstruct task_struct *taskh](j)}(hjh]hstruct}(hj\hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj\ubj)}(h h]h }(hj\hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj\ubh)}(hhh]j)}(h task_structh]h 
task_struct}(hj\hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj\ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj\modnameN classnameNj7j:)}j=]j@)}j3j\sbc.wq_worker_sleepingasbuh1hhj\ubj)}(h h]h }(hj]hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj\ubjU)}(hjuh]h*}(hj]hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj\ubj)}(htaskh]htask}(hj)]hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj\ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj\ubah}(h]h ]h"]h$]h&]jjuh1j{hj\hhhj\hMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj\hhhj\hMubah}(h]j\ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj\hMhj\hhubjC)}(hhh]h)}(ha worker is going to sleeph]ha worker is going to sleep}(hjS]hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjP]hhubah}(h]h ]h"]h$]h&]uh1jBhj\hhhj\hMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejk]jfjk]jgjhjiuh1jhhhjhNhNubjk)}(h**Parameters** ``struct task_struct *task`` task going to sleep **Description** This function is called from schedule() when a busy worker is going to sleep.h](h)}(h**Parameters**h]j)}(hju]h]h Parameters}(hjw]hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjs]ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjo]ubjg)}(hhh]jl)}(h1``struct task_struct *task`` task going to sleep h](jr)}(h``struct task_struct *task``h]j)}(hj]h]hstruct task_struct *task}(hj]hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj]ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj]ubj)}(hhh]h)}(htask going to sleeph]htask going to sleep}(hj]hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj]hMhj]ubah}(h]h ]h"]h$]h&]uh1jhj]ubeh}(h]h ]h"]h$]h&]uh1jkhj]hMhj]ubah}(h]h ]h"]h$]h&]uh1jfhjo]ubh)}(h**Description**h]j)}(hj]h]h Description}(hj]hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj]ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjo]ubh)}(hMThis function is called from schedule() when a busy worker is going to sleep.h]hMThis function is called from schedule() when a 
busy worker is going to sleep.}(hj]hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjo]ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jwq_worker_tick (C function)c.wq_worker_tickhNtauh1jhjhhhNhNubj)}(hhh](j)}(h.void wq_worker_tick (struct task_struct *task)h]j)}(h-void wq_worker_tick(struct task_struct *task)h](j)}(hvoidh]hvoid}(hj^hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj^hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hj#^hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj^hhhj"^hMubj)}(hwq_worker_tickh]j)}(hwq_worker_tickh]hwq_worker_tick}(hj5^hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj1^ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj^hhhj"^hMubj|)}(h(struct task_struct *task)h]j)}(hstruct task_struct *taskh](j)}(hjh]hstruct}(hjQ^hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjM^ubj)}(h h]h }(hj^^hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjM^ubh)}(hhh]j)}(h task_structh]h task_struct}(hjo^hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjl^ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjq^modnameN classnameNj7j:)}j=]j@)}j3j7^sbc.wq_worker_tickasbuh1hhjM^ubj)}(h h]h }(hj^hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjM^ubjU)}(hjuh]h*}(hj^hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjM^ubj)}(htaskh]htask}(hj^hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjM^ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjI^ubah}(h]h ]h"]h$]h&]jjuh1j{hj^hhhj"^hMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj ^hhhj"^hMubah}(h]j^ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj"^hMhj ^hhubjC)}(hhh]h)}(h4a scheduler tick occurred while a kworker is runningh]h4a scheduler tick occurred while a kworker is running}(hj^hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj^hhubah}(h]h ]h"]h$]h&]uh1jBhj ^hhhj"^hMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej^jfj^jgjhjiuh1jhhhjhNhNubjk)}(h**Parameters** ``struct task_struct *task`` task currently running **Description** Called from sched_tick(). 
We're in the IRQ context and the current worker's fields which follow the 'K' locking rule can be accessed safely.h](h)}(h**Parameters**h]j)}(hj^h]h Parameters}(hj^hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj^ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj^ubjg)}(hhh]jl)}(h4``struct task_struct *task`` task currently running h](jr)}(h``struct task_struct *task``h]j)}(hj_h]hstruct task_struct *task}(hj_hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj_ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj_ubj)}(hhh]h)}(htask currently runningh]htask currently running}(hj._hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj*_hMhj+_ubah}(h]h ]h"]h$]h&]uh1jhj_ubeh}(h]h ]h"]h$]h&]uh1jkhj*_hMhj _ubah}(h]h ]h"]h$]h&]uh1jfhj^ubh)}(h**Description**h]j)}(hjP_h]h Description}(hjR_hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjN_ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj^ubh)}(hCalled from sched_tick(). We're in the IRQ context and the current worker's fields which follow the 'K' locking rule can be accessed safely.h]hCalled from sched_tick(). 
We’re in the IRQ context and the current worker’s fields which follow the ‘K’ locking rule can be accessed safely.}(hjf_hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj^ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j wq_worker_last_func (C function)c.wq_worker_last_funchNtauh1jhjhhhNhNubj)}(hhh](j)}(h:work_func_t wq_worker_last_func (struct task_struct *task)h]j)}(h9work_func_t wq_worker_last_func(struct task_struct *task)h](h)}(hhh]j)}(h work_func_th]h work_func_t}(hj_hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj_ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj_modnameN classnameNj7j:)}j=]j@)}j3wq_worker_last_funcsbc.wq_worker_last_funcasbuh1hhj_hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hj_hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj_hhhj_hMubj)}(hwq_worker_last_funch]j)}(hj_h]hwq_worker_last_func}(hj_hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj_ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj_hhhj_hMubj|)}(h(struct task_struct *task)h]j)}(hstruct task_struct *taskh](j)}(hjh]hstruct}(hj_hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj_ubj)}(h h]h }(hj_hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj_ubh)}(hhh]j)}(h task_structh]h task_struct}(hj`hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj`ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj`modnameN classnameNj7j:)}j=]j_c.wq_worker_last_funcasbuh1hhj_ubj)}(h h]h }(hj#`hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj_ubjU)}(hjuh]h*}(hj1`hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj_ubj)}(htaskh]htask}(hj>`hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj_ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj_ubah}(h]h ]h"]h$]h&]jjuh1j{hj_hhhj_hMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj_hhhj_hMubah}(h]j_ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj_hMhj_hhubjC)}(hhh]h)}(h$retrieve worker's last work functionh]h&retrieve worker’s last work function}(hjh`hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhje`hhubah}(h]h 
]h"]h$]h&]uh1jBhj_hhhj_hMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej`jfj`jgjhjiuh1jhhhjhNhNubjk)}(hXx**Parameters** ``struct task_struct *task`` Task to retrieve last work function of. **Description** Determine the last function a worker executed. This is called from the scheduler to get a worker's last known identity. This function is called during schedule() when a kworker is going to sleep. It's used by psi to identify aggregation workers during dequeuing, to allow periodic aggregation to shut-off when that worker is the last task in the system or cgroup to go to sleep. As this function doesn't involve any workqueue-related locking, it only returns stable values when called from inside the scheduler's queuing and dequeuing paths, when **task**, which must be a kworker, is guaranteed to not be processing any works. **Context** raw_spin_lock_irq(rq->lock) **Return** The last work function ``current`` executed as a worker, NULL if it hasn't executed any work yet.h](h)}(h**Parameters**h]j)}(hj`h]h Parameters}(hj`hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj`ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hj`ubjg)}(hhh]jl)}(hE``struct task_struct *task`` Task to retrieve last work function of. h](jr)}(h``struct task_struct *task``h]j)}(hj`h]hstruct task_struct *task}(hj`hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj`ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj`ubj)}(hhh]h)}(h'Task to retrieve last work function of.h]h'Task to retrieve last work function of.}(hj`hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj`hMhj`ubah}(h]h ]h"]h$]h&]uh1jhj`ubeh}(h]h ]h"]h$]h&]uh1jkhj`hMhj`ubah}(h]h ]h"]h$]h&]uh1jfhj`ubh)}(h**Description**h]j)}(hj`h]h Description}(hj`hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj`ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hj`ubh)}(hwDetermine the last function a worker executed. 
This is called from the scheduler to get a worker's last known identity.h]hyDetermine the last function a worker executed. This is called from the scheduler to get a worker’s last known identity.}(hj`hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hj`ubh)}(hXThis function is called during schedule() when a kworker is going to sleep. It's used by psi to identify aggregation workers during dequeuing, to allow periodic aggregation to shut-off when that worker is the last task in the system or cgroup to go to sleep.h]hXThis function is called during schedule() when a kworker is going to sleep. It’s used by psi to identify aggregation workers during dequeuing, to allow periodic aggregation to shut-off when that worker is the last task in the system or cgroup to go to sleep.}(hj ahhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hj`ubh)}(hAs this function doesn't involve any workqueue-related locking, it only returns stable values when called from inside the scheduler's queuing and dequeuing paths, when **task**, which must be a kworker, is guaranteed to not be processing any works.h](hAs this function doesn’t involve any workqueue-related locking, it only returns stable values when called from inside the scheduler’s queuing and dequeuing paths, when }(hjahhhNhNubj)}(h**task**h]htask}(hj ahhhNhNubah}(h]h ]h"]h$]h&]uh1jhjaubhH, which must be a kworker, is guaranteed to not be processing any works.}(hjahhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj`ubh)}(h **Context**h]j)}(hj;ah]hContext}(hj=ahhhNhNubah}(h]h ]h"]h$]h&]uh1jhj9aubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj`ubh)}(hraw_spin_lock_irq(rq->lock)h]hraw_spin_lock_irq(rq->lock)}(hjQahhhNhNubah}(h]h 
]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hj`ubh)}(h **Return**h]j)}(hjbah]hReturn}(hjdahhhNhNubah}(h]h ]h"]h$]h&]uh1jhj`aubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj`ubh)}(haThe last work function ``current`` executed as a worker, NULL if it hasn't executed any work yet.h](hThe last work function }(hjxahhhNhNubj)}(h ``current``h]hcurrent}(hjahhhNhNubah}(h]h ]h"]h$]h&]uh1jhjxaubhA executed as a worker, NULL if it hasn’t executed any work yet.}(hjxahhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj`ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jwq_node_nr_active (C function)c.wq_node_nr_activehNtauh1jhjhhhNhNubj)}(hhh](j)}(hTstruct wq_node_nr_active * wq_node_nr_active (struct workqueue_struct *wq, int node)h]j)}(hRstruct wq_node_nr_active *wq_node_nr_active(struct workqueue_struct *wq, int node)h](j)}(hjh]hstruct}(hjahhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjahhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM&ubj)}(h h]h }(hjahhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjahhhjahM&ubh)}(hhh]j)}(hwq_node_nr_activeh]hwq_node_nr_active}(hjahhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjaubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjamodnameN classnameNj7j:)}j=]j@)}j3wq_node_nr_activesbc.wq_node_nr_activeasbuh1hhjahhhjahM&ubj)}(h h]h }(hjahhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjahhhjahM&ubjU)}(hjuh]h*}(hjbhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjahhhjahM&ubj)}(hwq_node_nr_activeh]j)}(hjah]hwq_node_nr_active}(hjbhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjbubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjahhhjahM&ubj|)}(h'(struct workqueue_struct *wq, int node)h](j)}(hstruct workqueue_struct *wqh](j)}(hjh]hstruct}(hj3bhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj/bubj)}(h h]h }(hj@bhhhNhNubah}(h]h ]j 
ah"]h$]h&]uh1jhj/bubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjQbhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjNbubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjSbmodnameN classnameNj7j:)}j=]jac.wq_node_nr_activeasbuh1hhj/bubj)}(h h]h }(hjobhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj/bubjU)}(hjuh]h*}(hj}bhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj/bubj)}(hwqh]hwq}(hjbhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj/bubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj+bubj)}(hint nodeh](j)}(hinth]hint}(hjbhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjbubj)}(h h]h }(hjbhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjbubj)}(hnodeh]hnode}(hjbhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjbubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj+bubeh}(h]h ]h"]h$]h&]jjuh1j{hjahhhjahM&ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjahhhjahM&ubah}(h]jaah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjahM&hjahhubjC)}(hhh]h)}(h"Determine wq_node_nr_active to useh]h"Determine wq_node_nr_active to use}(hjbhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM&hjbhhubah}(h]h ]h"]h$]h&]uh1jBhjahhhjahM&ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejcjfjcjgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``struct workqueue_struct *wq`` workqueue of interest ``int node`` NUMA node, can be ``NUMA_NO_NODE`` **Description** Determine wq_node_nr_active to use for **wq** on **node**. Returns: - ``NULL`` for per-cpu workqueues as they don't need to use shared nr_active. - node_nr_active[nr_node_ids] if **node** is ``NUMA_NO_NODE``. 
- Otherwise, node_nr_active[**node**].h](h)}(h**Parameters**h]j)}(hj ch]h Parameters}(hj chhhNhNubah}(h]h ]h"]h$]h&]uh1jhj cubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM*hjcubjg)}(hhh](jl)}(h6``struct workqueue_struct *wq`` workqueue of interest h](jr)}(h``struct workqueue_struct *wq``h]j)}(hj*ch]hstruct workqueue_struct *wq}(hj,chhhNhNubah}(h]h ]h"]h$]h&]uh1jhj(cubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM'hj$cubj)}(hhh]h)}(hworkqueue of interesth]hworkqueue of interest}(hjCchhhNhNubah}(h]h ]h"]h$]h&]uh1hhj?chM'hj@cubah}(h]h ]h"]h$]h&]uh1jhj$cubeh}(h]h ]h"]h$]h&]uh1jkhj?chM'hj!cubjl)}(h0``int node`` NUMA node, can be ``NUMA_NO_NODE`` h](jr)}(h ``int node``h]j)}(hjcch]hint node}(hjechhhNhNubah}(h]h ]h"]h$]h&]uh1jhjacubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM(hj]cubj)}(hhh]h)}(h"NUMA node, can be ``NUMA_NO_NODE``h](hNUMA node, can be }(hj|chhhNhNubj)}(h``NUMA_NO_NODE``h]h NUMA_NO_NODE}(hjchhhNhNubah}(h]h ]h"]h$]h&]uh1jhj|cubeh}(h]h ]h"]h$]h&]uh1hhjxchM(hjycubah}(h]h ]h"]h$]h&]uh1jhj]cubeh}(h]h ]h"]h$]h&]uh1jkhjxchM(hj!cubeh}(h]h ]h"]h$]h&]uh1jfhjcubh)}(h**Description**h]j)}(hjch]h Description}(hjchhhNhNubah}(h]h ]h"]h$]h&]uh1jhjcubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM*hjcubh)}(hCDetermine wq_node_nr_active to use for **wq** on **node**. Returns:h](h'Determine wq_node_nr_active to use for }(hjchhhNhNubj)}(h**wq**h]hwq}(hjchhhNhNubah}(h]h ]h"]h$]h&]uh1jhjcubh on }(hjchhhNhNubj)}(h**node**h]hnode}(hjchhhNhNubah}(h]h ]h"]h$]h&]uh1jhjcubh . Returns:}(hjchhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM)hjcubj )}(hhh](j%)}(hL``NULL`` for per-cpu workqueues as they don't need to use shared nr_active. 
h]h)}(hK``NULL`` for per-cpu workqueues as they don't need to use shared nr_active.h](j)}(h``NULL``h]hNULL}(hjdhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjcubhE for per-cpu workqueues as they don’t need to use shared nr_active.}(hjchhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM+hjcubah}(h]h ]h"]h$]h&]uh1j$hjcubj%)}(h=node_nr_active[nr_node_ids] if **node** is ``NUMA_NO_NODE``. h]h)}(hnode_nr_active**[]->max. **wq** must be unbound. max_active is distributed among nodes according to the proportions of numbers of online cpus. The result is always between **wq->min_active** and max_active.h](h)}(h**Parameters**h]j)}(hjeh]h Parameters}(hjehhhNhNubah}(h]h ]h"]h$]h&]uh1jhjeubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMChjeubjg)}(hhh](jl)}(h4``struct workqueue_struct *wq`` workqueue to update h](jr)}(h``struct workqueue_struct *wq``h]j)}(hjeh]hstruct workqueue_struct *wq}(hjehhhNhNubah}(h]h ]h"]h$]h&]uh1jhjeubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM@hjeubj)}(hhh]h)}(hworkqueue to updateh]hworkqueue to update}(hjehhhNhNubah}(h]h ]h"]h$]h&]uh1hhjehM@hjeubah}(h]h ]h"]h$]h&]uh1jhjeubeh}(h]h ]h"]h$]h&]uh1jkhjehM@hjeubjl)}(hE``int off_cpu`` CPU that's going down, -1 if a CPU is not going down h](jr)}(h``int off_cpu``h]j)}(hjfh]h int off_cpu}(hjfhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjfubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMAhjfubj)}(hhh]h)}(h4CPU that's going down, -1 if a CPU is not going downh]h6CPU that’s going down, -1 if a CPU is not going down}(hj5fhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj1fhMAhj2fubah}(h]h ]h"]h$]h&]uh1jhjfubeh}(h]h ]h"]h$]h&]uh1jkhj1fhMAhjeubeh}(h]h ]h"]h$]h&]uh1jfhjeubh)}(h**Description**h]j)}(hjWfh]h Description}(hjYfhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjUfubah}(h]h 
]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMChjeubh)}(hUpdate **wq->node_nr_active**[]->max. **wq** must be unbound. max_active is distributed among nodes according to the proportions of numbers of online cpus. The result is always between **wq->min_active** and max_active.h](hUpdate }(hjmfhhhNhNubj)}(h%**wq->node_nr_active**[]->max. **wq**h]h!wq->node_nr_active**[]->max. **wq}(hjufhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjmfubh must be unbound. max_active is distributed among nodes according to the proportions of numbers of online cpus. The result is always between }(hjmfhhhNhNubj)}(h**wq->min_active**h]hwq->min_active}(hjfhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjmfubh and max_active.}(hjmfhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMBhjeubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jget_pwq (C function) c.get_pwqhNtauh1jhjhhhNhNubj)}(hhh](j)}(h)void get_pwq (struct pool_workqueue *pwq)h]j)}(h(void get_pwq(struct pool_workqueue *pwq)h](j)}(hvoidh]hvoid}(hjfhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjfhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMsubj)}(h h]h }(hjfhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjfhhhjfhMsubj)}(hget_pwqh]j)}(hget_pwqh]hget_pwq}(hjfhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjfubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjfhhhjfhMsubj|)}(h(struct pool_workqueue *pwq)h]j)}(hstruct pool_workqueue *pwqh](j)}(hjh]hstruct}(hjfhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjfubj)}(h h]h }(hj ghhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjfubh)}(hhh]j)}(hpool_workqueueh]hpool_workqueue}(hjghhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjgubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjgmodnameN classnameNj7j:)}j=]j@)}j3jfsb c.get_pwqasbuh1hhjfubj)}(h h]h }(hj;ghhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjfubjU)}(hjuh]h*}(hjIghhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjfubj)}(hpwqh]hpwq}(hjVghhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjfubeh}(h]h 
]h"]h$]h&]noemphjjuh1jhjfubah}(h]h ]h"]h$]h&]jjuh1j{hjfhhhjfhMsubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjfhhhjfhMsubah}(h]jfah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjfhMshjfhhubjC)}(hhh]h)}(h6get an extra reference on the specified pool_workqueueh]h6get an extra reference on the specified pool_workqueue}(hjghhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMshj}ghhubah}(h]h ]h"]h$]h&]uh1jBhjfhhhjfhMsubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejgjfjgjgjhjiuh1jhhhjhNhNubjk)}(h**Parameters** ``struct pool_workqueue *pwq`` pool_workqueue to get **Description** Obtain an extra reference on **pwq**. The caller should guarantee that **pwq** has positive refcnt and be holding the matching pool->lock.h](h)}(h**Parameters**h]j)}(hjgh]h Parameters}(hjghhhNhNubah}(h]h ]h"]h$]h&]uh1jhjgubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMwhjgubjg)}(hhh]jl)}(h5``struct pool_workqueue *pwq`` pool_workqueue to get h](jr)}(h``struct pool_workqueue *pwq``h]j)}(hjgh]hstruct pool_workqueue *pwq}(hjghhhNhNubah}(h]h ]h"]h$]h&]uh1jhjgubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMthjgubj)}(hhh]h)}(hpool_workqueue to geth]hpool_workqueue to get}(hjghhhNhNubah}(h]h ]h"]h$]h&]uh1hhjghMthjgubah}(h]h ]h"]h$]h&]uh1jhjgubeh}(h]h ]h"]h$]h&]uh1jkhjghMthjgubah}(h]h ]h"]h$]h&]uh1jfhjgubh)}(h**Description**h]j)}(hjgh]h Description}(hjghhhNhNubah}(h]h ]h"]h$]h&]uh1jhjgubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMvhjgubh)}(hObtain an extra reference on **pwq**. The caller should guarantee that **pwq** has positive refcnt and be holding the matching pool->lock.h](hObtain an extra reference on }(hjhhhhNhNubj)}(h**pwq**h]hpwq}(hjhhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjhubh$. 
The caller should guarantee that }(hjhhhhNhNubj)}(h**pwq**h]hpwq}(hj,hhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjhubh< has positive refcnt and be holding the matching pool->lock.}(hjhhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMuhjgubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jput_pwq (C function) c.put_pwqhNtauh1jhjhhhNhNubj)}(hhh](j)}(h)void put_pwq (struct pool_workqueue *pwq)h]j)}(h(void put_pwq(struct pool_workqueue *pwq)h](j)}(hvoidh]hvoid}(hjehhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjahhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjthhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjahhhhjshhMubj)}(hput_pwqh]j)}(hput_pwqh]hput_pwq}(hjhhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjhubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjahhhhjshhMubj|)}(h(struct pool_workqueue *pwq)h]j)}(hstruct pool_workqueue *pwqh](j)}(hjh]hstruct}(hjhhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhubj)}(h h]h }(hjhhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhubh)}(hhh]j)}(hpool_workqueueh]hpool_workqueue}(hjhhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjhubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjhmodnameN classnameNj7j:)}j=]j@)}j3jhsb c.put_pwqasbuh1hhjhubj)}(h h]h }(hjhhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhubjU)}(hjuh]h*}(hjhhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjhubj)}(hpwqh]hpwq}(hjhhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjhubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjhubah}(h]h ]h"]h$]h&]jjuh1j{hjahhhhjshhMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj]hhhhjshhMubah}(h]jXhah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjshhMhjZhhhubjC)}(hhh]h)}(hput a pool_workqueue referenceh]hput a pool_workqueue reference}(hj%ihhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj"ihhubah}(h]h ]h"]h$]h&]uh1jBhjZhhhhjshhMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej=ijfj=ijgjhjiuh1jhhhjhNhNubjk)}(h**Parameters** ``struct pool_workqueue *pwq`` pool_workqueue to put **Description** Drop a 
reference of **pwq**. If its refcnt reaches zero, schedule its destruction. The caller should be holding the matching pool->lock.h](h)}(h**Parameters**h]j)}(hjGih]h Parameters}(hjIihhhNhNubah}(h]h ]h"]h$]h&]uh1jhjEiubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjAiubjg)}(hhh]jl)}(h5``struct pool_workqueue *pwq`` pool_workqueue to put h](jr)}(h``struct pool_workqueue *pwq``h]j)}(hjfih]hstruct pool_workqueue *pwq}(hjhihhhNhNubah}(h]h ]h"]h$]h&]uh1jhjdiubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj`iubj)}(hhh]h)}(hpool_workqueue to puth]hpool_workqueue to put}(hjihhhNhNubah}(h]h ]h"]h$]h&]uh1hhj{ihMhj|iubah}(h]h ]h"]h$]h&]uh1jhj`iubeh}(h]h ]h"]h$]h&]uh1jkhj{ihMhj]iubah}(h]h ]h"]h$]h&]uh1jfhjAiubh)}(h**Description**h]j)}(hjih]h Description}(hjihhhNhNubah}(h]h ]h"]h$]h&]uh1jhjiubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjAiubh)}(hDrop a reference of **pwq**. If its refcnt reaches zero, schedule its destruction. The caller should be holding the matching pool->lock.h](hDrop a reference of }(hjihhhNhNubj)}(h**pwq**h]hpwq}(hjihhhNhNubah}(h]h ]h"]h$]h&]uh1jhjiubho. If its refcnt reaches zero, schedule its destruction. 
The caller should be holding the matching pool->lock.}(hjihhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjAiubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jput_pwq_unlocked (C function)c.put_pwq_unlockedhNtauh1jhjhhhNhNubj)}(hhh](j)}(h2void put_pwq_unlocked (struct pool_workqueue *pwq)h]j)}(h1void put_pwq_unlocked(struct pool_workqueue *pwq)h](j)}(hvoidh]hvoid}(hjihhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjihhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjihhhjjhMubj)}(hput_pwq_unlockedh]j)}(hput_pwq_unlockedh]hput_pwq_unlocked}(hjjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjihhhjjhMubj|)}(h(struct pool_workqueue *pwq)h]j)}(hstruct pool_workqueue *pwqh](j)}(hjh]hstruct}(hj5jhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj1jubj)}(h h]h }(hjBjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj1jubh)}(hhh]j)}(hpool_workqueueh]hpool_workqueue}(hjSjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjPjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjUjmodnameN classnameNj7j:)}j=]j@)}j3jjsbc.put_pwq_unlockedasbuh1hhj1jubj)}(h h]h }(hjsjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj1jubjU)}(hjuh]h*}(hjjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj1jubj)}(hpwqh]hpwq}(hjjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj1jubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj-jubah}(h]h ]h"]h$]h&]jjuh1j{hjihhhjjhMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjihhhjjhMubah}(h]jiah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjjhMhjihhubjC)}(hhh]h)}(h+put_pwq() with surrounding pool lock/unlockh]h+put_pwq() with surrounding pool lock/unlock}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjjhhubah}(h]h ]h"]h$]h&]uh1jBhjihhhjjhMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjjfjjjgjhjiuh1jhhhjhNhNubjk)}(h**Parameters** ``struct pool_workqueue *pwq`` pool_workqueue to put (can be ``NULL``) **Description** put_pwq() 
with locking. This function also allows ``NULL`` **pwq**.h](h)}(h**Parameters**h]j)}(hjjh]h Parameters}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjjubjg)}(hhh]jl)}(hG``struct pool_workqueue *pwq`` pool_workqueue to put (can be ``NULL``) h](jr)}(h``struct pool_workqueue *pwq``h]j)}(hjjh]hstruct pool_workqueue *pwq}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjjubj)}(hhh]h)}(h'pool_workqueue to put (can be ``NULL``)h](hpool_workqueue to put (can be }(hjkhhhNhNubj)}(h``NULL``h]hNULL}(hjkhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjkubh)}(hjkhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhjkhMhjkubah}(h]h ]h"]h$]h&]uh1jhjjubeh}(h]h ]h"]h$]h&]uh1jkhjkhMhjjubah}(h]h ]h"]h$]h&]uh1jfhjjubh)}(h**Description**h]j)}(hjFkh]h Description}(hjHkhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjDkubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjjubh)}(hDput_pwq() with locking. This function also allows ``NULL`` **pwq**.h](h3put_pwq() with locking. 
This function also allows }(hj\khhhNhNubj)}(h``NULL``h]hNULL}(hjdkhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj\kubh }(hj\khhhNhNubj)}(h**pwq**h]hpwq}(hjvkhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj\kubh.}(hj\khhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j!pwq_tryinc_nr_active (C function)c.pwq_tryinc_nr_activehNtauh1jhjhhhNhNubj)}(hhh](j)}(hAbool pwq_tryinc_nr_active (struct pool_workqueue *pwq, bool fill)h]j)}(h@bool pwq_tryinc_nr_active(struct pool_workqueue *pwq, bool fill)h](j)}(hj*h]hbool}(hjkhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjkhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjkhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjkhhhjkhMubj)}(hpwq_tryinc_nr_activeh]j)}(hpwq_tryinc_nr_activeh]hpwq_tryinc_nr_active}(hjkhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjkubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjkhhhjkhMubj|)}(h'(struct pool_workqueue *pwq, bool fill)h](j)}(hstruct pool_workqueue *pwqh](j)}(hjh]hstruct}(hjkhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjkubj)}(h h]h }(hjkhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjkubh)}(hhh]j)}(hpool_workqueueh]hpool_workqueue}(hj lhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjlubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj lmodnameN classnameNj7j:)}j=]j@)}j3jksbc.pwq_tryinc_nr_activeasbuh1hhjkubj)}(h h]h }(hj)lhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjkubjU)}(hjuh]h*}(hj7lhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjkubj)}(hpwqh]hpwq}(hjDlhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjkubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjkubj)}(h bool fillh](j)}(hj*h]hbool}(hj]lhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjYlubj)}(h h]h }(hjjlhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjYlubj)}(hfillh]hfill}(hjxlhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjYlubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjkubeh}(h]h ]h"]h$]h&]jjuh1j{hjkhhhjkhMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjkhhhjkhMubah}(h]jkah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjkhMhjkhhubjC)}(hhh]h)}(h$Try to increment nr_active 
for a pwqh]h$Try to increment nr_active for a pwq}(hjlhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjlhhubah}(h]h ]h"]h$]h&]uh1jBhjkhhhjkhMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejljfjljgjhjiuh1jhhhjhNhNubjk)}(hX-**Parameters** ``struct pool_workqueue *pwq`` pool_workqueue of interest ``bool fill`` max_active may have increased, try to increase concurrency level **Description** Try to increment nr_active for **pwq**. Returns ``true`` if an nr_active count is successfully obtained. ``false`` otherwise.h](h)}(h**Parameters**h]j)}(hjlh]h Parameters}(hjlhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjlubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjlubjg)}(hhh](jl)}(h:``struct pool_workqueue *pwq`` pool_workqueue of interest h](jr)}(h``struct pool_workqueue *pwq``h]j)}(hjlh]hstruct pool_workqueue *pwq}(hjlhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjlubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjlubj)}(hhh]h)}(hpool_workqueue of interesth]hpool_workqueue of interest}(hjlhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjlhMhjlubah}(h]h ]h"]h$]h&]uh1jhjlubeh}(h]h ]h"]h$]h&]uh1jkhjlhMhjlubjl)}(hO``bool fill`` max_active may have increased, try to increase concurrency level h](jr)}(h ``bool fill``h]j)}(hjmh]h bool fill}(hjmhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjmubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjmubj)}(hhh]h)}(h@max_active may have increased, try to increase concurrency levelh]h@max_active may have increased, try to increase concurrency level}(hj5mhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj1mhMhj2mubah}(h]h ]h"]h$]h&]uh1jhjmubeh}(h]h ]h"]h$]h&]uh1jkhj1mhMhjlubeh}(h]h ]h"]h$]h&]uh1jfhjlubh)}(h**Description**h]j)}(hjWmh]h Description}(hjYmhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjUmubah}(h]h 
]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjlubh)}(h}Try to increment nr_active for **pwq**. Returns ``true`` if an nr_active count is successfully obtained. ``false`` otherwise.h](hTry to increment nr_active for }(hjmmhhhNhNubj)}(h**pwq**h]hpwq}(hjumhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjmmubh . Returns }(hjmmhhhNhNubj)}(h``true``h]htrue}(hjmhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjmmubh1 if an nr_active count is successfully obtained. }(hjmmhhhNhNubj)}(h ``false``h]hfalse}(hjmhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjmmubh otherwise.}(hjmmhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjlubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j(pwq_activate_first_inactive (C function)c.pwq_activate_first_inactivehNtauh1jhjhhhNhNubj)}(hhh](j)}(hHbool pwq_activate_first_inactive (struct pool_workqueue *pwq, bool fill)h]j)}(hGbool pwq_activate_first_inactive(struct pool_workqueue *pwq, bool fill)h](j)}(hj*h]hbool}(hjmhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjmhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjmhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjmhhhjmhMubj)}(hpwq_activate_first_inactiveh]j)}(hpwq_activate_first_inactiveh]hpwq_activate_first_inactive}(hjmhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjmubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjmhhhjmhMubj|)}(h'(struct pool_workqueue *pwq, bool fill)h](j)}(hstruct pool_workqueue *pwqh](j)}(hjh]hstruct}(hjnhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj nubj)}(h h]h }(hjnhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj nubh)}(hhh]j)}(hpool_workqueueh]hpool_workqueue}(hj,nhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj)nubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj.nmodnameN classnameNj7j:)}j=]j@)}j3jmsbc.pwq_activate_first_inactiveasbuh1hhj nubj)}(h h]h }(hjLnhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj nubjU)}(hjuh]h*}(hjZnhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj 
nubj)}(hpwqh]hpwq}(hjgnhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj nubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjnubj)}(h bool fillh](j)}(hj*h]hbool}(hjnhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj|nubj)}(h h]h }(hjnhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj|nubj)}(hfillh]hfill}(hjnhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj|nubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjnubeh}(h]h ]h"]h$]h&]jjuh1j{hjmhhhjmhMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjmhhhjmhMubah}(h]jmah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjmhMhjmhhubjC)}(hhh]h)}(h.Activate the first inactive work item on a pwqh]h.Activate the first inactive work item on a pwq}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjnhhubah}(h]h ]h"]h$]h&]uh1jBhjmhhhjmhMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejnjfjnjgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``struct pool_workqueue *pwq`` pool_workqueue of interest ``bool fill`` max_active may have increased, try to increase concurrency level **Description** Activate the first inactive work item of **pwq** if available and allowed by max_active limit. Returns ``true`` if an inactive work item has been activated. 
``false`` if no inactive work item is found or max_active limit is reached.h](h)}(h**Parameters**h]j)}(hjnh]h Parameters}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjnubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjnubjg)}(hhh](jl)}(h:``struct pool_workqueue *pwq`` pool_workqueue of interest h](jr)}(h``struct pool_workqueue *pwq``h]j)}(hjoh]hstruct pool_workqueue *pwq}(hjohhhNhNubah}(h]h ]h"]h$]h&]uh1jhjoubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjoubj)}(hhh]h)}(hpool_workqueue of interesth]hpool_workqueue of interest}(hjohhhNhNubah}(h]h ]h"]h$]h&]uh1hhjohMhjoubah}(h]h ]h"]h$]h&]uh1jhjoubeh}(h]h ]h"]h$]h&]uh1jkhjohMhjnubjl)}(hO``bool fill`` max_active may have increased, try to increase concurrency level h](jr)}(h ``bool fill``h]j)}(hj?oh]h bool fill}(hjAohhhNhNubah}(h]h ]h"]h$]h&]uh1jhj=oubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj9oubj)}(hhh]h)}(h@max_active may have increased, try to increase concurrency levelh]h@max_active may have increased, try to increase concurrency level}(hjXohhhNhNubah}(h]h ]h"]h$]h&]uh1hhjTohMhjUoubah}(h]h ]h"]h$]h&]uh1jhj9oubeh}(h]h ]h"]h$]h&]uh1jkhjTohMhjnubeh}(h]h ]h"]h$]h&]uh1jfhjnubh)}(h**Description**h]j)}(hjzoh]h Description}(hj|ohhhNhNubah}(h]h ]h"]h$]h&]uh1jhjxoubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjnubh)}(h^Activate the first inactive work item of **pwq** if available and allowed by max_active limit.h](h)Activate the first inactive work item of }(hjohhhNhNubj)}(h**pwq**h]hpwq}(hjohhhNhNubah}(h]h ]h"]h$]h&]uh1jhjoubh. 
if available and allowed by max_active limit.}(hjohhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjnubh)}(hReturns ``true`` if an inactive work item has been activated. ``false`` if no inactive work item is found or max_active limit is reached.h](hReturns }(hjohhhNhNubj)}(h``true``h]htrue}(hjohhhNhNubah}(h]h ]h"]h$]h&]uh1jhjoubh. if an inactive work item has been activated. }(hjohhhNhNubj)}(h ``false``h]hfalse}(hjohhhNhNubah}(h]h ]h"]h$]h&]uh1jhjoubhB if no inactive work item is found or max_active limit is reached.}(hjohhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjnubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](junplug_oldest_pwq (C function)c.unplug_oldest_pwqhNtauh1jhjhhhNhNubj)}(hhh](j)}(h4void unplug_oldest_pwq (struct workqueue_struct *wq)h]j)}(h3void unplug_oldest_pwq(struct workqueue_struct *wq)h](j)}(hvoidh]hvoid}(hjphhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjphhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM*ubj)}(h h]h }(hjphhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjphhhjphM*ubj)}(hunplug_oldest_pwqh]j)}(hunplug_oldest_pwqh]hunplug_oldest_pwq}(hj%phhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj!pubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjphhhjphM*ubj|)}(h(struct workqueue_struct *wq)h]j)}(hstruct workqueue_struct *wqh](j)}(hjh]hstruct}(hjAphhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj=pubj)}(h h]h }(hjNphhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj=pubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hj_phhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj\pubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjapmodnameN classnameNj7j:)}j=]j@)}j3j'psbc.unplug_oldest_pwqasbuh1hhj=pubj)}(h h]h }(hjphhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj=pubjU)}(hjuh]h*}(hjphhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj=pubj)}(hwqh]hwq}(hjphhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj=pubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj9pubah}(h]h 
]h"]h$]h&]jjuh1j{hjphhhjphM*ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjohhhjphM*ubah}(h]joah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjphM*hjohhubjC)}(hhh]h)}(h unplug the oldest pool_workqueueh]h unplug the oldest pool_workqueue}(hjphhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM*hjphhubah}(h]h ]h"]h$]h&]uh1jBhjohhhjphM*ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejpjfjpjgjhjiuh1jhhhjhNhNubjk)}(hX!**Parameters** ``struct workqueue_struct *wq`` workqueue_struct where its oldest pwq is to be unplugged **Description** This function should only be called for ordered workqueues where only the oldest pwq is unplugged, the others are plugged to suspend execution to ensure proper work item ordering:: dfl_pwq --------------+ [P] - plugged | v pwqs -> A -> B [P] -> C [P] (newest) | | | 1 3 5 | | | 2 4 6 When the oldest pwq is drained and removed, this function should be called to unplug the next oldest one to start its work item execution. Note that pwq's are linked into wq->pwqs with the oldest first, so the first one in the list is the oldest.h](h)}(h**Parameters**h]j)}(hjph]h Parameters}(hjphhhNhNubah}(h]h ]h"]h$]h&]uh1jhjpubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM.hjpubjg)}(hhh]jl)}(hY``struct workqueue_struct *wq`` workqueue_struct where its oldest pwq is to be unplugged h](jr)}(h``struct workqueue_struct *wq``h]j)}(hjqh]hstruct workqueue_struct *wq}(hjqhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjqubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM+hjpubj)}(hhh]h)}(h8workqueue_struct where its oldest pwq is to be unpluggedh]h8workqueue_struct where its oldest pwq is to be unplugged}(hjqhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjqhM+hjqubah}(h]h ]h"]h$]h&]uh1jhjpubeh}(h]h ]h"]h$]h&]uh1jkhjqhM+hjpubah}(h]h ]h"]h$]h&]uh1jfhjpubh)}(h**Description**h]j)}(hj@qh]h Description}(hjBqhhhNhNubah}(h]h 
]h"]h$]h&]uh1jhj>qubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM-hjpubh)}(hThis function should only be called for ordered workqueues where only the oldest pwq is unplugged, the others are plugged to suspend execution to ensure proper work item ordering::h]hThis function should only be called for ordered workqueues where only the oldest pwq is unplugged, the others are plugged to suspend execution to ensure proper work item ordering:}(hjVqhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM,hjpubj)}(hdfl_pwq --------------+ [P] - plugged | v pwqs -> A -> B [P] -> C [P] (newest) | | | 1 3 5 | | | 2 4 6h]hdfl_pwq --------------+ [P] - plugged | v pwqs -> A -> B [P] -> C [P] (newest) | | | 1 3 5 | | | 2 4 6}hjeqsbah}(h]h ]h"]h$]h&]jjuh1jhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM0hjpubh)}(hWhen the oldest pwq is drained and removed, this function should be called to unplug the next oldest one to start its work item execution. Note that pwq's are linked into wq->pwqs with the oldest first, so the first one in the list is the oldest.h]hWhen the oldest pwq is drained and removed, this function should be called to unplug the next oldest one to start its work item execution. 
Note that pwq’s are linked into wq->pwqs with the oldest first, so the first one in the list is the oldest.}(hjtqhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM9hjpubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j&node_activate_pending_pwq (C function)c.node_activate_pending_pwqhNtauh1jhjhhhNhNubj)}(hhh](j)}(h_void node_activate_pending_pwq (struct wq_node_nr_active *nna, struct worker_pool *caller_pool)h]j)}(h^void node_activate_pending_pwq(struct wq_node_nr_active *nna, struct worker_pool *caller_pool)h](j)}(hvoidh]hvoid}(hjqhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjqhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM^ubj)}(h h]h }(hjqhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjqhhhjqhM^ubj)}(hnode_activate_pending_pwqh]j)}(hnode_activate_pending_pwqh]hnode_activate_pending_pwq}(hjqhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjqubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjqhhhjqhM^ubj|)}(h@(struct wq_node_nr_active *nna, struct worker_pool *caller_pool)h](j)}(hstruct wq_node_nr_active *nnah](j)}(hjh]hstruct}(hjqhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjqubj)}(h h]h }(hjqhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjqubh)}(hhh]j)}(hwq_node_nr_activeh]hwq_node_nr_active}(hjqhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjqubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjrmodnameN classnameNj7j:)}j=]j@)}j3jqsbc.node_activate_pending_pwqasbuh1hhjqubj)}(h h]h }(hjrhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjqubjU)}(hjuh]h*}(hj,rhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjqubj)}(hnnah]hnna}(hj9rhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjqubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjqubj)}(hstruct worker_pool *caller_poolh](j)}(hjh]hstruct}(hjRrhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjNrubj)}(h h]h }(hj_rhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjNrubh)}(hhh]j)}(h worker_poolh]h worker_pool}(hjprhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjmrubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjrrmodnameN 
classnameNj7j:)}j=]jrc.node_activate_pending_pwqasbuh1hhjNrubj)}(h h]h }(hjrhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjNrubjU)}(hjuh]h*}(hjrhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjNrubj)}(h caller_poolh]h caller_pool}(hjrhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjNrubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjqubeh}(h]h ]h"]h$]h&]jjuh1j{hjqhhhjqhM^ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjqhhhjqhM^ubah}(h]jqah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjqhM^hjqhhubjC)}(hhh]h)}(h-Activate a pending pwq on a wq_node_nr_activeh]h-Activate a pending pwq on a wq_node_nr_active}(hjrhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM^hjrhhubah}(h]h ]h"]h$]h&]uh1jBhjqhhhjqhM^ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejrjfjrjgjhjiuh1jhhhjhNhNubjk)}(hXT**Parameters** ``struct wq_node_nr_active *nna`` wq_node_nr_active to activate a pending pwq for ``struct worker_pool *caller_pool`` worker_pool the caller is locking **Description** Activate a pwq in **nna->pending_pwqs**. Called with **caller_pool** locked. 
**caller_pool** may be unlocked and relocked to lock other worker_pools.h](h)}(h**Parameters**h]j)}(hjrh]h Parameters}(hjrhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjrubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMbhjrubjg)}(hhh](jl)}(hR``struct wq_node_nr_active *nna`` wq_node_nr_active to activate a pending pwq for h](jr)}(h!``struct wq_node_nr_active *nna``h]j)}(hjsh]hstruct wq_node_nr_active *nna}(hjshhhNhNubah}(h]h ]h"]h$]h&]uh1jhjsubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM_hjsubj)}(hhh]h)}(h/wq_node_nr_active to activate a pending pwq forh]h/wq_node_nr_active to activate a pending pwq for}(hj-shhhNhNubah}(h]h ]h"]h$]h&]uh1hhj)shM_hj*subah}(h]h ]h"]h$]h&]uh1jhjsubeh}(h]h ]h"]h$]h&]uh1jkhj)shM_hj subjl)}(hF``struct worker_pool *caller_pool`` worker_pool the caller is locking h](jr)}(h#``struct worker_pool *caller_pool``h]j)}(hjMsh]hstruct worker_pool *caller_pool}(hjOshhhNhNubah}(h]h ]h"]h$]h&]uh1jhjKsubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM`hjGsubj)}(hhh]h)}(h!worker_pool the caller is lockingh]h!worker_pool the caller is locking}(hjfshhhNhNubah}(h]h ]h"]h$]h&]uh1hhjbshM`hjcsubah}(h]h ]h"]h$]h&]uh1jhjGsubeh}(h]h ]h"]h$]h&]uh1jkhjbshM`hj subeh}(h]h ]h"]h$]h&]uh1jfhjrubh)}(h**Description**h]j)}(hjsh]h Description}(hjshhhNhNubah}(h]h ]h"]h$]h&]uh1jhjsubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMbhjrubh)}(hActivate a pwq in **nna->pending_pwqs**. Called with **caller_pool** locked. **caller_pool** may be unlocked and relocked to lock other worker_pools.h](hActivate a pwq in }(hjshhhNhNubj)}(h**nna->pending_pwqs**h]hnna->pending_pwqs}(hjshhhNhNubah}(h]h ]h"]h$]h&]uh1jhjsubh. Called with }(hjshhhNhNubj)}(h**caller_pool**h]h caller_pool}(hjshhhNhNubah}(h]h ]h"]h$]h&]uh1jhjsubh locked. 
}(hjshhhNhNubj)}(h**caller_pool**h]h caller_pool}(hjshhhNhNubah}(h]h ]h"]h$]h&]uh1jhjsubh9 may be unlocked and relocked to lock other worker_pools.}(hjshhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMahjrubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jpwq_dec_nr_active (C function)c.pwq_dec_nr_activehNtauh1jhjhhhNhNubj)}(hhh](j)}(h3void pwq_dec_nr_active (struct pool_workqueue *pwq)h]j)}(h2void pwq_dec_nr_active(struct pool_workqueue *pwq)h](j)}(hvoidh]hvoid}(hjthhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjshhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjthhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjshhhjthMubj)}(hpwq_dec_nr_activeh]j)}(hpwq_dec_nr_activeh]hpwq_dec_nr_active}(hj$thhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj tubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjshhhjthMubj|)}(h(struct pool_workqueue *pwq)h]j)}(hstruct pool_workqueue *pwqh](j)}(hjh]hstruct}(hj@thhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjpool->lock**.h](h)}(h**Parameters**h]j)}(hjth]h Parameters}(hjthhhNhNubah}(h]h ]h"]h$]h&]uh1jhjtubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjtubjg)}(hhh]jl)}(h:``struct pool_workqueue *pwq`` pool_workqueue of interest h](jr)}(h``struct pool_workqueue *pwq``h]j)}(hjuh]hstruct pool_workqueue *pwq}(hjuhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjuubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjtubj)}(hhh]h)}(hpool_workqueue of interesth]hpool_workqueue of interest}(hjuhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjuhMhjuubah}(h]h ]h"]h$]h&]uh1jhjtubeh}(h]h ]h"]h$]h&]uh1jkhjuhMhjtubah}(h]h ]h"]h$]h&]uh1jfhjtubh)}(h**Description**h]j)}(hj?uh]h Description}(hjAuhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj=uubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjtubh)}(hDecrement **pwq**'s 
nr_active and try to activate the first inactive work item. For unbound workqueues, this function may temporarily drop **pwq->pool->lock**.h](h Decrement }(hjUuhhhNhNubj)}(h**pwq**h]hpwq}(hj]uhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjUuubh|’s nr_active and try to activate the first inactive work item. For unbound workqueues, this function may temporarily drop }(hjUuhhhNhNubj)}(h**pwq->pool->lock**h]hpwq->pool->lock}(hjouhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjUuubh.}(hjUuhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjtubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j!pwq_dec_nr_in_flight (C function)c.pwq_dec_nr_in_flighthNtauh1jhjhhhNhNubj)}(hhh](j)}(hOvoid pwq_dec_nr_in_flight (struct pool_workqueue *pwq, unsigned long work_data)h]j)}(hNvoid pwq_dec_nr_in_flight(struct pool_workqueue *pwq, unsigned long work_data)h](j)}(hvoidh]hvoid}(hjuhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjuhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjuhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjuhhhjuhMubj)}(hpwq_dec_nr_in_flighth]j)}(hpwq_dec_nr_in_flighth]hpwq_dec_nr_in_flight}(hjuhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjuubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjuhhhjuhMubj|)}(h5(struct pool_workqueue *pwq, unsigned long work_data)h](j)}(hstruct pool_workqueue *pwqh](j)}(hjh]hstruct}(hjuhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjuubj)}(h h]h }(hjuhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjuubh)}(hhh]j)}(hpool_workqueueh]hpool_workqueue}(hjvhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjvubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjvmodnameN classnameNj7j:)}j=]j@)}j3jusbc.pwq_dec_nr_in_flightasbuh1hhjuubj)}(h h]h }(hj#vhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjuubjU)}(hjuh]h*}(hj1vhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjuubj)}(hpwqh]hpwq}(hj>vhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjuubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjuubj)}(hunsigned long work_datah](j)}(hunsignedh]hunsigned}(hjWvhhhNhNubah}(h]h 
]jah"]h$]h&]uh1jhjSvubj)}(h h]h }(hjevhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjSvubj)}(hlongh]hlong}(hjsvhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjSvubj)}(h h]h }(hjvhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjSvubj)}(h work_datah]h work_data}(hjvhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjSvubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjuubeh}(h]h ]h"]h$]h&]jjuh1j{hjuhhhjuhMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjuhhhjuhMubah}(h]juah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjuhMhjuhhubjC)}(hhh]h)}(hdecrement pwq's nr_in_flighth]hdecrement pwq’s nr_in_flight}(hjvhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjvhhubah}(h]h ]h"]h$]h&]uh1jBhjuhhhjuhMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejvjfjvjgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``struct pool_workqueue *pwq`` pwq of interest ``unsigned long work_data`` work_data of work which left the queue **Description** A work either has completed or is removed from pending queue, decrement nr_in_flight of its pwq and handle workqueue flushing. **NOTE** For unbound workqueues, this function may temporarily drop **pwq->pool->lock** and thus should be called after all other state updates for the in-flight work item is complete. 
**Context** raw_spin_lock_irq(pool->lock).h](h)}(h**Parameters**h]j)}(hjvh]h Parameters}(hjvhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjvubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjvubjg)}(hhh](jl)}(h/``struct pool_workqueue *pwq`` pwq of interest h](jr)}(h``struct pool_workqueue *pwq``h]j)}(hjvh]hstruct pool_workqueue *pwq}(hjvhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjvubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjvubj)}(hhh]h)}(hpwq of interesth]hpwq of interest}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjwhMhjwubah}(h]h ]h"]h$]h&]uh1jhjvubeh}(h]h ]h"]h$]h&]uh1jkhjwhMhjvubjl)}(hC``unsigned long work_data`` work_data of work which left the queue h](jr)}(h``unsigned long work_data``h]j)}(hj3wh]hunsigned long work_data}(hj5whhhNhNubah}(h]h ]h"]h$]h&]uh1jhj1wubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj-wubj)}(hhh]h)}(h&work_data of work which left the queueh]h&work_data of work which left the queue}(hjLwhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjHwhMhjIwubah}(h]h ]h"]h$]h&]uh1jhj-wubeh}(h]h ]h"]h$]h&]uh1jkhjHwhMhjvubeh}(h]h ]h"]h$]h&]uh1jfhjvubh)}(h**Description**h]j)}(hjnwh]h Description}(hjpwhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjlwubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjvubh)}(h~A work either has completed or is removed from pending queue, decrement nr_in_flight of its pwq and handle workqueue flushing.h]h~A work either has completed or is removed from pending queue, decrement nr_in_flight of its pwq and handle workqueue flushing.}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjvubh)}(h**NOTE**h]j)}(hjwh]hNOTE}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjwubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: 
./kernel/workqueue.chMhjvubh)}(hFor unbound workqueues, this function may temporarily drop **pwq->pool->lock** and thus should be called after all other state updates for the in-flight work item is complete.h](h;For unbound workqueues, this function may temporarily drop }(hjwhhhNhNubj)}(h**pwq->pool->lock**h]hpwq->pool->lock}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjwubha and thus should be called after all other state updates for the in-flight work item is complete.}(hjwhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjvubh)}(h **Context**h]j)}(hjwh]hContext}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjwubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjvubh)}(hraw_spin_lock_irq(pool->lock).h]hraw_spin_lock_irq(pool->lock).}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjvubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j try_to_grab_pending (C function)c.try_to_grab_pendinghNtauh1jhjhhhNhNubj)}(hhh](j)}(hXint try_to_grab_pending (struct work_struct *work, u32 cflags, unsigned long *irq_flags)h]j)}(hWint try_to_grab_pending(struct work_struct *work, u32 cflags, unsigned long *irq_flags)h](j)}(hinth]hint}(hjxhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjxhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hj"xhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjxhhhj!xhMubj)}(htry_to_grab_pendingh]j)}(htry_to_grab_pendingh]htry_to_grab_pending}(hj4xhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj0xubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjxhhhj!xhMubj|)}(h@(struct work_struct *work, u32 cflags, unsigned long *irq_flags)h](j)}(hstruct work_struct *workh](j)}(hjh]hstruct}(hjPxhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjLxubj)}(h h]h }(hj]xhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjLxubh)}(hhh]j)}(h work_structh]h work_struct}(hjnxhhhNhNubah}(h]h 
]j"ah"]h$]h&]uh1jhjkxubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjpxmodnameN classnameNj7j:)}j=]j@)}j3j6xsbc.try_to_grab_pendingasbuh1hhjLxubj)}(h h]h }(hjxhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjLxubjU)}(hjuh]h*}(hjxhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjLxubj)}(hworkh]hwork}(hjxhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjLxubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjHxubj)}(h u32 cflagsh](h)}(hhh]j)}(hu32h]hu32}(hjxhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjxubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjxmodnameN classnameNj7j:)}j=]jxc.try_to_grab_pendingasbuh1hhjxubj)}(h h]h }(hjxhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjxubj)}(hcflagsh]hcflags}(hjxhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjxubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjHxubj)}(hunsigned long *irq_flagsh](j)}(hunsignedh]hunsigned}(hj yhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjyubj)}(h h]h }(hjyhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjyubj)}(hlongh]hlong}(hj&yhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjyubj)}(h h]h }(hj4yhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjyubjU)}(hjuh]h*}(hjByhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjyubj)}(h irq_flagsh]h irq_flags}(hjOyhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjyubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjHxubeh}(h]h ]h"]h$]h&]jjuh1j{hjxhhhj!xhMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj xhhhj!xhMubah}(h]jxah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj!xhMhjxhhubjC)}(hhh]h)}(h-steal work item from worklist and disable irqh]h-steal work item from worklist and disable irq}(hjyyhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjvyhhubah}(h]h ]h"]h$]h&]uh1jBhjxhhhj!xhMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejyjfjyjgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``struct work_struct *work`` work item to steal ``u32 cflags`` ``WORK_CANCEL_`` flags ``unsigned long *irq_flags`` place to store irq state **Description** Try to grab PENDING bit of **work**. This function can handle **work** in any stable state - idle, on timer or on worklist. 
======== ================================================================ 1 if **work** was pending and we successfully stole PENDING 0 if **work** was idle and we claimed PENDING -EAGAIN if PENDING couldn't be grabbed at the moment, safe to busy-retry ======== ================================================================ **Note** On >= 0 return, the caller owns **work**'s PENDING bit. To avoid getting interrupted while holding PENDING and **work** off queue, irq must be disabled on entry. This, combined with delayed_work->timer being irqsafe, ensures that we return -EAGAIN for finite short period of time. On successful return, >= 0, irq is disabled and the caller is responsible for releasing it using local_irq_restore(***irq_flags**). This function is safe to call from any context including IRQ handler.h](h)}(h**Parameters**h]j)}(hjyh]h Parameters}(hjyhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjyubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjyubjg)}(hhh](jl)}(h0``struct work_struct *work`` work item to steal h](jr)}(h``struct work_struct *work``h]j)}(hjyh]hstruct work_struct *work}(hjyhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjyubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjyubj)}(hhh]h)}(hwork item to stealh]hwork item to steal}(hjyhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjyhMhjyubah}(h]h ]h"]h$]h&]uh1jhjyubeh}(h]h ]h"]h$]h&]uh1jkhjyhMhjyubjl)}(h&``u32 cflags`` ``WORK_CANCEL_`` flags h](jr)}(h``u32 cflags``h]j)}(hjyh]h u32 cflags}(hjyhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjyubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjyubj)}(hhh]h)}(h``WORK_CANCEL_`` flagsh](j)}(h``WORK_CANCEL_``h]h WORK_CANCEL_}(hjzhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj zubh flags}(hj zhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhjzhMhj zubah}(h]h ]h"]h$]h&]uh1jhjyubeh}(h]h ]h"]h$]h&]uh1jkhjzhMhjyubjl)}(h6``unsigned long *irq_flags`` place to 
store irq state h](jr)}(h``unsigned long *irq_flags``h]j)}(hj:zh]hunsigned long *irq_flags}(hj= 0 return, the caller owns **work**'s PENDING bit. To avoid getting interrupted while holding PENDING and **work** off queue, irq must be disabled on entry. This, combined with delayed_work->timer being irqsafe, ensures that we return -EAGAIN for finite short period of time.h](h On >= 0 return, the caller owns }(hj{hhhNhNubj)}(h**work**h]hwork}(hj{hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj{ubhJ’s PENDING bit. To avoid getting interrupted while holding PENDING and }(hj{hhhNhNubj)}(h**work**h]hwork}(hj{hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj{ubh off queue, irq must be disabled on entry. This, combined with delayed_work->timer being irqsafe, ensures that we return -EAGAIN for finite short period of time.}(hj{hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjyubh)}(hOn successful return, >= 0, irq is disabled and the caller is responsible for releasing it using local_irq_restore(***irq_flags**).h](hsOn successful return, >= 0, irq is disabled and the caller is responsible for releasing it using local_irq_restore(}(hj|hhhNhNubj)}(h***irq_flags**h]h *irq_flags}(hj|hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj|ubh).}(hj|hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM$hjyubh)}(hEThis function is safe to call from any context including IRQ handler.h]hEThis function is safe to call from any context including IRQ handler.}(hj1|hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM'hjyubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jwork_grab_pending (C function)c.work_grab_pendinghNtauh1jhjhhhNhNubj)}(hhh](j)}(hWbool work_grab_pending (struct work_struct *work, u32 cflags, unsigned long *irq_flags)h]j)}(hVbool work_grab_pending(struct work_struct *work, u32 cflags, 
unsigned long *irq_flags)h](j)}(hj*h]hbool}(hj`|hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj\|hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjn|hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj\|hhhjm|hMubj)}(hwork_grab_pendingh]j)}(hwork_grab_pendingh]hwork_grab_pending}(hj|hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj||ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj\|hhhjm|hMubj|)}(h@(struct work_struct *work, u32 cflags, unsigned long *irq_flags)h](j)}(hstruct work_struct *workh](j)}(hjh]hstruct}(hj|hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj|ubj)}(h h]h }(hj|hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj|ubh)}(hhh]j)}(h work_structh]h work_struct}(hj|hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj|ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj|modnameN classnameNj7j:)}j=]j@)}j3j|sbc.work_grab_pendingasbuh1hhj|ubj)}(h h]h }(hj|hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj|ubjU)}(hjuh]h*}(hj|hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj|ubj)}(hworkh]hwork}(hj|hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj|ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj|ubj)}(h u32 cflagsh](h)}(hhh]j)}(hu32h]hu32}(hj}hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj}ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj}modnameN classnameNj7j:)}j=]j|c.work_grab_pendingasbuh1hhj }ubj)}(h h]h }(hj/}hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj }ubj)}(hcflagsh]hcflags}(hj=}hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj }ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj|ubj)}(hunsigned long *irq_flagsh](j)}(hunsignedh]hunsigned}(hjV}hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjR}ubj)}(h h]h }(hjd}hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjR}ubj)}(hlongh]hlong}(hjr}hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjR}ubj)}(h h]h }(hj}hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjR}ubjU)}(hjuh]h*}(hj}hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjR}ubj)}(h irq_flagsh]h irq_flags}(hj}hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjR}ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj|ubeh}(h]h ]h"]h$]h&]jjuh1j{hj\|hhhjm|hMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjX|hhhjm|hMubah}(h]jS|ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjm|hMhjU|hhubjC)}(hhh]h)}(h-steal work item from worklist and disable 
irqh]h-steal work item from worklist and disable irq}(hj}hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj}hhubah}(h]h ]h"]h$]h&]uh1jBhjU|hhhjm|hMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej}jfj}jgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``struct work_struct *work`` work item to steal ``u32 cflags`` ``WORK_CANCEL_`` flags ``unsigned long *irq_flags`` place to store IRQ state **Description** Grab PENDING bit of **work**. **work** can be in any stable state - idle, on timer or on worklist. Can be called from any context. IRQ is disabled on return with IRQ state stored in ***irq_flags**. The caller is responsible for re-enabling it using local_irq_restore(). Returns ``true`` if **work** was pending. ``false`` if idle.h](h)}(h**Parameters**h]j)}(hj}h]h Parameters}(hj}hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj}ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj}ubjg)}(hhh](jl)}(h0``struct work_struct *work`` work item to steal h](jr)}(h``struct work_struct *work``h]j)}(hj~h]hstruct work_struct *work}(hj~hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj~ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj~ubj)}(hhh]h)}(hwork item to stealh]hwork item to steal}(hj~hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj~hMhj~ubah}(h]h ]h"]h$]h&]uh1jhj~ubeh}(h]h ]h"]h$]h&]uh1jkhj~hMhj}ubjl)}(h&``u32 cflags`` ``WORK_CANCEL_`` flags h](jr)}(h``u32 cflags``h]j)}(hj?~h]h u32 cflags}(hjA~hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj=~ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj9~ubj)}(hhh]h)}(h``WORK_CANCEL_`` flagsh](j)}(h``WORK_CANCEL_``h]h WORK_CANCEL_}(hj\~hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjX~ubh flags}(hjX~hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhjT~hMhjU~ubah}(h]h ]h"]h$]h&]uh1jhj9~ubeh}(h]h ]h"]h$]h&]uh1jkhjT~hMhj}ubjl)}(h6``unsigned long *irq_flags`` place to store IRQ state 
h](jr)}(h``unsigned long *irq_flags``h]j)}(hj~h]hunsigned long *irq_flags}(hj~hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj~ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj~ubj)}(hhh]h)}(hplace to store IRQ stateh]hplace to store IRQ state}(hj~hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj~hMhj~ubah}(h]h ]h"]h$]h&]uh1jhj~ubeh}(h]h ]h"]h$]h&]uh1jkhj~hMhj}ubeh}(h]h ]h"]h$]h&]uh1jfhj}ubh)}(h**Description**h]j)}(hj~h]h Description}(hj~hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj~ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj}ubh)}(hbGrab PENDING bit of **work**. **work** can be in any stable state - idle, on timer or on worklist.h](hGrab PENDING bit of }(hj~hhhNhNubj)}(h**work**h]hwork}(hj~hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj~ubh. }(hj~hhhNhNubj)}(h**work**h]hwork}(hj~hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj~ubh< can be in any stable state - idle, on timer or on worklist.}(hj~hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj}ubh)}(hCan be called from any context. IRQ is disabled on return with IRQ state stored in ***irq_flags**. The caller is responsible for re-enabling it using local_irq_restore().h](hSCan be called from any context. IRQ is disabled on return with IRQ state stored in }(hj hhhNhNubj)}(h***irq_flags**h]h *irq_flags}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubhI. 
The caller is responsible for re-enabling it using local_irq_restore().}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj}ubh)}(hlock).h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubjg)}(hhh](jl)}(h7``struct pool_workqueue *pwq`` pwq **work** belongs to h](jr)}(h``struct pool_workqueue *pwq``h]j)}(hjh]hstruct pool_workqueue *pwq}(hjāhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubj)}(hhh]h)}(hpwq **work** belongs toh](hpwq }(hjہhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjہubh belongs to}(hjہhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhjׁhMhj؁ubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjׁhMhjubjl)}(h,``struct work_struct *work`` work to insert h](jr)}(h``struct work_struct *work``h]j)}(hj h]hstruct work_struct *work}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubj)}(hhh]h)}(hwork to inserth]hwork to insert}(hj&hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj"hMhj#ubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhj"hMhjubjl)}(h+``struct list_head *head`` insertion point h](jr)}(h``struct list_head *head``h]j)}(hjFh]hstruct list_head *head}(hjHhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjDubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj@ubj)}(hhh]h)}(hinsertion pointh]hinsertion point}(hj_hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj[hMhj\ubah}(h]h ]h"]h$]h&]uh1jhj@ubeh}(h]h ]h"]h$]h&]uh1jkhj[hMhjubjl)}(h>``unsigned int extra_flags`` extra WORK_STRUCT_* flags to set h](jr)}(h``unsigned int extra_flags``h]j)}(hjh]hunsigned int extra_flags}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj}ubah}(h]h 
]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjyubj)}(hhh]h)}(h extra WORK_STRUCT_* flags to seth]h extra WORK_STRUCT_* flags to set}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jhjyubeh}(h]h ]h"]h$]h&]uh1jkhjhMhjubeh}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(hgInsert **work** which belongs to **pwq** after **head**. **extra_flags** is or'd to work_struct flags.h](hInsert }(hjЂhhhNhNubj)}(h**work**h]hwork}(hj؂hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjЂubh which belongs to }(hjЂhhhNhNubj)}(h**pwq**h]hpwq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjЂubh after }(hjЂhhhNhNubj)}(h**head**h]hhead}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjЂubh. }(hjЂhhhNhNubj)}(h**extra_flags**h]h extra_flags}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjЂubh is or’d to work_struct flags.}(hjЂhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(h **Context**h]j)}(hj)h]hContext}(hj+hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj'ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(hraw_spin_lock_irq(pool->lock).h]hraw_spin_lock_irq(pool->lock).}(hj?hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jqueue_work_on (C function)c.queue_work_onhNtauh1jhjhhhNhNubj)}(hhh](j)}(hSbool queue_work_on (int cpu, struct workqueue_struct *wq, struct work_struct *work)h]j)}(hRbool queue_work_on(int cpu, struct workqueue_struct *wq, struct work_struct *work)h](j)}(hj*h]hbool}(hjnhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMi 
ubj)}(h h]h }(hj|hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjjhhhj{hMi ubj)}(h queue_work_onh]j)}(h queue_work_onh]h queue_work_on}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjjhhhj{hMi ubj|)}(h@(int cpu, struct workqueue_struct *wq, struct work_struct *work)h](j)}(hint cpuh](j)}(hinth]hint}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubj)}(hcpuh]hcpu}(hjƃhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct workqueue_struct *wqh](j)}(hjh]hstruct}(hj߃hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjۃubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjۃubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3jsbc.queue_work_onasbuh1hhjۃubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjۃubjU)}(hjuh]h*}(hj+hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjۃubj)}(hwqh]hwq}(hj8hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjۃubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct work_struct *workh](j)}(hjh]hstruct}(hjQhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjMubj)}(h h]h }(hj^hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjMubh)}(hhh]j)}(h work_structh]h work_struct}(hjohhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjlubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjqmodnameN classnameNj7j:)}j=]jc.queue_work_onasbuh1hhjMubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjMubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjMubj)}(hworkh]hwork}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjMubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1j{hjjhhhj{hMi ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjfhhhj{hMi ubah}(h]jaah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj{hMi hjchhubjC)}(hhh]h)}(hqueue work on specific cpuh]hqueue work on specific cpu}(hj҄hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMi hjτhhubah}(h]h ]h"]h$]h&]uh1jBhjchhhj{hMi ubeh}(h]h 
](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``int cpu`` CPU number to execute work on ``struct workqueue_struct *wq`` workqueue to use ``struct work_struct *work`` work to queue **Description** We queue the work to a specific CPU, the caller must ensure it can't go away. Callers that fail to ensure that the specified CPU cannot go away will execute on a randomly chosen CPU. But note well that callers specifying a CPU that never has been online will get a splat. **Return** ``false`` if **work** was already on a queue, ``true`` otherwise.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMm hjubjg)}(hhh](jl)}(h*``int cpu`` CPU number to execute work on h](jr)}(h ``int cpu``h]j)}(hjh]hint cpu}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMj hj ubj)}(hhh]h)}(hCPU number to execute work onh]hCPU number to execute work on}(hj,hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj(hMj hj)ubah}(h]h ]h"]h$]h&]uh1jhj ubeh}(h]h ]h"]h$]h&]uh1jkhj(hMj hj ubjl)}(h1``struct workqueue_struct *wq`` workqueue to use h](jr)}(h``struct workqueue_struct *wq``h]j)}(hjLh]hstruct workqueue_struct *wq}(hjNhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjJubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMk hjFubj)}(hhh]h)}(hworkqueue to useh]hworkqueue to use}(hjehhhNhNubah}(h]h ]h"]h$]h&]uh1hhjahMk hjbubah}(h]h ]h"]h$]h&]uh1jhjFubeh}(h]h ]h"]h$]h&]uh1jkhjahMk hj ubjl)}(h+``struct work_struct *work`` work to queue h](jr)}(h``struct work_struct *work``h]j)}(hjh]hstruct work_struct *work}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMl hjubj)}(hhh]h)}(h work to queueh]h work to 
queue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMl hjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhMl hj ubeh}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hjh]h Description}(hj…hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMn hjubh)}(hXWe queue the work to a specific CPU, the caller must ensure it can't go away. Callers that fail to ensure that the specified CPU cannot go away will execute on a randomly chosen CPU. But note well that callers specifying a CPU that never has been online will get a splat.h]hXWe queue the work to a specific CPU, the caller must ensure it can’t go away. Callers that fail to ensure that the specified CPU cannot go away will execute on a randomly chosen CPU. But note well that callers specifying a CPU that never has been online will get a splat.}(hjօhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMm hjubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMs hjubh)}(hA``false`` if **work** was already on a queue, ``true`` otherwise.h](j)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if }(hjhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh was already on a queue, }(hjhhhNhNubj)}(h``true``h]htrue}(hj%hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh otherwise.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMt hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j!select_numa_node_cpu (C function)c.select_numa_node_cpuhNtauh1jhjhhhNhNubj)}(hhh](j)}(h#int select_numa_node_cpu (int node)h]j)}(h"int select_numa_node_cpu(int node)h](j)}(hinth]hint}(hj^hhhNhNubah}(h]h 
]jah"]h$]h&]uh1jhjZhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM ubj)}(h h]h }(hjmhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjZhhhjlhM ubj)}(hselect_numa_node_cpuh]j)}(hselect_numa_node_cpuh]hselect_numa_node_cpu}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj{ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjZhhhjlhM ubj|)}(h (int node)h]j)}(hint nodeh](j)}(hinth]hint}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubj)}(hnodeh]hnode}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j{hjZhhhjlhM ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjVhhhjlhM ubah}(h]jQah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjlhM hjShhubjC)}(hhh]h)}(hSelect a CPU based on NUMA nodeh]hSelect a CPU based on NUMA node}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjކhhubah}(h]h ]h"]h$]h&]uh1jBhjShhhjlhM ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(hX\**Parameters** ``int node`` NUMA node ID that we want to select a CPU from **Description** This function will attempt to find a "random" cpu available on a given node. 
If there are no CPUs available on the given node it will return WORK_CPU_UNBOUND indicating that we should just schedule to any available CPU if we need to schedule this work.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubjg)}(hhh]jl)}(h<``int node`` NUMA node ID that we want to select a CPU from h](jr)}(h ``int node``h]j)}(hj"h]hint node}(hj$hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubj)}(hhh]h)}(h.NUMA node ID that we want to select a CPU fromh]h.NUMA node ID that we want to select a CPU from}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj7hM hj8ubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhj7hM hjubah}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hj]h]h Description}(hj_hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj[ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubh)}(hThis function will attempt to find a "random" cpu available on a given node. If there are no CPUs available on the given node it will return WORK_CPU_UNBOUND indicating that we should just schedule to any available CPU if we need to schedule this work.h]hXThis function will attempt to find a “random” cpu available on a given node. 
If there are no CPUs available on the given node it will return WORK_CPU_UNBOUND indicating that we should just schedule to any available CPU if we need to schedule this work.}(hjshhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jqueue_work_node (C function)c.queue_work_nodehNtauh1jhjhhhNhNubj)}(hhh](j)}(hVbool queue_work_node (int node, struct workqueue_struct *wq, struct work_struct *work)h]j)}(hUbool queue_work_node(int node, struct workqueue_struct *wq, struct work_struct *work)h](j)}(hj*h]hbool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM ubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhM ubj)}(hqueue_work_nodeh]j)}(hqueue_work_nodeh]hqueue_work_node}(hj‡hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhM ubj|)}(hA(int node, struct workqueue_struct *wq, struct work_struct *work)h](j)}(hint nodeh](j)}(hinth]hint}(hjއhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjڇubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjڇubj)}(hnodeh]hnode}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjڇubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjևubj)}(hstruct workqueue_struct *wqh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hj hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hj1hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj.ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj3modnameN classnameNj7j:)}j=]j@)}j3jćsbc.queue_work_nodeasbuh1hhjubj)}(h h]h }(hjQhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hj_hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hwqh]hwq}(hjlhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjևubj)}(hstruct work_struct *workh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(h work_structh]h 
work_struct}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]jMc.queue_work_nodeasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hjψhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hworkh]hwork}(hj܈hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjևubeh}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhM ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjhM ubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhM hjhhubjC)}(hhh]h)}(h2queue work on a "random" cpu for a given NUMA nodeh]h6queue work on a “random” cpu for a given NUMA node}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjhhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjhM ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(hXH**Parameters** ``int node`` NUMA node that we are targeting the work for ``struct workqueue_struct *wq`` workqueue to use ``struct work_struct *work`` work to queue **Description** We queue the work to a "random" CPU within a given NUMA node. The basic idea here is to provide a way to somehow associate work with a given NUMA node. This function will only make a best effort attempt at getting this onto the right NUMA node. If no node is requested or the requested node is offline then we just fall back to standard queue_work behavior. Currently the "random" CPU ends up being the first available CPU in the intersection of cpu_online_mask and the cpumask of the node, unless we are running on the node. In that case we just use the current CPU. 
**Return** ``false`` if **work** was already on a queue, ``true`` otherwise.h](h)}(h**Parameters**h]j)}(hj(h]h Parameters}(hj*hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj&ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hj"ubjg)}(hhh](jl)}(h:``int node`` NUMA node that we are targeting the work for h](jr)}(h ``int node``h]j)}(hjGh]hint node}(hjIhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjEubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjAubj)}(hhh]h)}(h,NUMA node that we are targeting the work forh]h,NUMA node that we are targeting the work for}(hj`hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj\hM hj]ubah}(h]h ]h"]h$]h&]uh1jhjAubeh}(h]h ]h"]h$]h&]uh1jkhj\hM hj>ubjl)}(h1``struct workqueue_struct *wq`` workqueue to use h](jr)}(h``struct workqueue_struct *wq``h]j)}(hjh]hstruct workqueue_struct *wq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj~ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjzubj)}(hhh]h)}(hworkqueue to useh]hworkqueue to use}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM hjubah}(h]h ]h"]h$]h&]uh1jhjzubeh}(h]h ]h"]h$]h&]uh1jkhjhM hj>ubjl)}(h+``struct work_struct *work`` work to queue h](jr)}(h``struct work_struct *work``h]j)}(hjh]hstruct work_struct *work}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubj)}(hhh]h)}(h work to queueh]h work to queue}(hj҉hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjΉhM hjωubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjΉhM hj>ubeh}(h]h ]h"]h$]h&]uh1jfhj"ubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hj"ubh)}(hWe queue the work to a "random" CPU within a given NUMA node. 
The basic idea here is to provide a way to somehow associate work with a given NUMA node.h]hWe queue the work to a “random” CPU within a given NUMA node. The basic idea here is to provide a way to somehow associate work with a given NUMA node.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hj"ubh)}(hThis function will only make a best effort attempt at getting this onto the right NUMA node. If no node is requested or the requested node is offline then we just fall back to standard queue_work behavior.h]hThis function will only make a best effort attempt at getting this onto the right NUMA node. If no node is requested or the requested node is offline then we just fall back to standard queue_work behavior.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hj"ubh)}(hCurrently the "random" CPU ends up being the first available CPU in the intersection of cpu_online_mask and the cpumask of the node, unless we are running on the node. In that case we just use the current CPU.h]hCurrently the “random” CPU ends up being the first available CPU in the intersection of cpu_online_mask and the cpumask of the node, unless we are running on the node. 
In that case we just use the current CPU.}(hj(hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hj"ubh)}(h **Return**h]j)}(hj9h]hReturn}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj7ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hj"ubh)}(hA``false`` if **work** was already on a queue, ``true`` otherwise.h](j)}(h ``false``h]hfalse}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1jhjOubh if }(hjOhhhNhNubj)}(h**work**h]hwork}(hjehhhNhNubah}(h]h ]h"]h$]h&]uh1jhjOubh was already on a queue, }(hjOhhhNhNubj)}(h``true``h]htrue}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjOubh otherwise.}(hjOhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hj"ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j"queue_delayed_work_on (C function)c.queue_delayed_work_onhNtauh1jhjhhhNhNubj)}(hhh](j)}(hrbool queue_delayed_work_on (int cpu, struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay)h]j)}(hqbool queue_delayed_work_on(int cpu, struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay)h](j)}(hj*h]hbool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM ubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhM ubj)}(hqueue_delayed_work_onh]j)}(hqueue_delayed_work_onh]hqueue_delayed_work_on}(hjЊhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj̊ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhM ubj|)}(hW(int cpu, struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay)h](j)}(hint cpuh](j)}(hinth]hint}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubj)}(hcpuh]hcpu}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct workqueue_struct *wqh](j)}(hjh]hstruct}(hj!hhhNhNubah}(h]h 
]jah"]h$]h&]uh1jhjubj)}(h h]h }(hj.hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hj?hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj<ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjAmodnameN classnameNj7j:)}j=]j@)}j3jҊsbc.queue_delayed_work_onasbuh1hhjubj)}(h h]h }(hj_hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hjmhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hwqh]hwq}(hjzhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct delayed_work *dworkh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(h delayed_workh]h delayed_work}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j[c.queue_delayed_work_onasbuh1hhjubj)}(h h]h }(hjϋhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hj݋hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hdworkh]hdwork}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hunsigned long delayh](j)}(hunsignedh]hunsigned}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubj)}(hlongh]hlong}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hj-hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubj)}(hdelayh]hdelay}(hj;hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhM ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjhM ubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhM hjhhubjC)}(hhh]h)}(h&queue work on specific CPU after delayh]h&queue work on specific CPU after delay}(hjehhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjbhhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjhM ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej}jfj}jgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``int cpu`` CPU number to execute work on ``struct workqueue_struct *wq`` workqueue to use ``struct delayed_work *dwork`` work to queue ``unsigned long delay`` 
number of jiffies to wait before queueing **Description** We queue the delayed_work to a specific CPU, for non-zero delays the caller must ensure it is online and can't go away. Callers that fail to ensure this, may get **dwork->timer** queued to an offlined CPU and this will prevent queueing of **dwork->work** unless the offlined CPU becomes online again. **Return** ``false`` if **work** was already on a queue, ``true`` otherwise. If **delay** is zero and **dwork** is idle, it will be scheduled for immediate execution.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubjg)}(hhh](jl)}(h*``int cpu`` CPU number to execute work on h](jr)}(h ``int cpu``h]j)}(hjh]hint cpu}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubj)}(hhh]h)}(hCPU number to execute work onh]hCPU number to execute work on}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM hjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhM hjubjl)}(h1``struct workqueue_struct *wq`` workqueue to use h](jr)}(h``struct workqueue_struct *wq``h]j)}(hjߌh]hstruct workqueue_struct *wq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj݌ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjٌubj)}(hhh]h)}(hworkqueue to useh]hworkqueue to use}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM hjubah}(h]h ]h"]h$]h&]uh1jhjٌubeh}(h]h ]h"]h$]h&]uh1jkhjhM hjubjl)}(h-``struct delayed_work *dwork`` work to queue h](jr)}(h``struct delayed_work *dwork``h]j)}(hjh]hstruct delayed_work *dwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubj)}(hhh]h)}(h work to queueh]h work to queue}(hj1hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj-hM hj.ubah}(h]h 
]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhj-hM hjubjl)}(hB``unsigned long delay`` number of jiffies to wait before queueing h](jr)}(h``unsigned long delay``h]j)}(hjQh]hunsigned long delay}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1jhjOubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjKubj)}(hhh]h)}(h)number of jiffies to wait before queueingh]h)number of jiffies to wait before queueing}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjfhM hjgubah}(h]h ]h"]h$]h&]uh1jhjKubeh}(h]h ]h"]h$]h&]uh1jkhjfhM hjubeh}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubh)}(hX,We queue the delayed_work to a specific CPU, for non-zero delays the caller must ensure it is online and can't go away. Callers that fail to ensure this, may get **dwork->timer** queued to an offlined CPU and this will prevent queueing of **dwork->work** unless the offlined CPU becomes online again.h](hWe queue the delayed_work to a specific CPU, for non-zero delays the caller must ensure it is online and can’t go away. Callers that fail to ensure this, may get }(hjhhhNhNubj)}(h**dwork->timer**h]h dwork->timer}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh= queued to an offlined CPU and this will prevent queueing of }(hjhhhNhNubj)}(h**dwork->work**h]h dwork->work}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh. unless the offlined CPU becomes online again.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubh)}(h **Return**h]j)}(hj׍h]hReturn}(hjٍhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjՍubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubh)}(h``false`` if **work** was already on a queue, ``true`` otherwise. 
If **delay** is zero and **dwork** is idle, it will be scheduled for immediate execution.h](j)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if }(hjhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh was already on a queue, }(hjhhhNhNubj)}(h``true``h]htrue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh otherwise. If }(hjhhhNhNubj)}(h **delay**h]hdelay}(hj'hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh is zero and }(hjhhhNhNubj)}(h **dwork**h]hdwork}(hj9hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh7 is idle, it will be scheduled for immediate execution.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j mod_delayed_work_on (C function)c.mod_delayed_work_onhNtauh1jhjhhhNhNubj)}(hhh](j)}(hpbool mod_delayed_work_on (int cpu, struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay)h]j)}(hobool mod_delayed_work_on(int cpu, struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay)h](j)}(hj*h]hbool}(hjrhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjnhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM2 ubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjnhhhjhM2 ubj)}(hmod_delayed_work_onh]j)}(hmod_delayed_work_onh]hmod_delayed_work_on}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjnhhhjhM2 ubj|)}(hW(int cpu, struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay)h](j)}(hint cpuh](j)}(hinth]hint}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubj)}(hcpuh]hcpu}(hjʎhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct workqueue_struct *wqh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjߎubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjߎubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjhhhNhNubah}(h]h 
]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3jsbc.mod_delayed_work_onasbuh1hhjߎubj)}(h h]h }(hj!hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjߎubjU)}(hjuh]h*}(hj/hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjߎubj)}(hwqh]hwq}(hj<hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjߎubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct delayed_work *dworkh](j)}(hjh]hstruct}(hjUhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjQubj)}(h h]h }(hjbhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjQubh)}(hhh]j)}(h delayed_workh]h delayed_work}(hjshhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjpubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjumodnameN classnameNj7j:)}j=]jc.mod_delayed_work_onasbuh1hhjQubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjQubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjQubj)}(hdworkh]hdwork}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjQubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hunsigned long delayh](j)}(hunsignedh]hunsigned}(hjŏhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjӏhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubj)}(hlongh]hlong}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubj)}(hdelayh]hdelay}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1j{hjnhhhjhM2 ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjjhhhjhM2 ubah}(h]jeah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhM2 hjghhubjC)}(hhh]h)}(h7modify delay of or queue a delayed work on specific CPUh]h7modify delay of or queue a delayed work on specific CPU}(hj'hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM2 hj$hhubah}(h]h ]h"]h$]h&]uh1jBhjghhhjhM2 ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej?jfj?jgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``int cpu`` CPU number to execute work on ``struct workqueue_struct *wq`` workqueue to use ``struct delayed_work *dwork`` work to queue ``unsigned long delay`` number of jiffies to wait before queueing **Description** If **dwork** is idle, equivalent to 
queue_delayed_work_on(); otherwise, modify **dwork**'s timer so that it expires after **delay**. If **delay** is zero, **work** is guaranteed to be scheduled immediately regardless of its current state. This function is safe to call from any context including IRQ handler. See try_to_grab_pending() for details. **Return** ``false`` if **dwork** was idle and queued, ``true`` if **dwork** was pending and its timer was modified.h](h)}(h**Parameters**h]j)}(hjIh]h Parameters}(hjKhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjGubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM6 hjCubjg)}(hhh](jl)}(h*``int cpu`` CPU number to execute work on h](jr)}(h ``int cpu``h]j)}(hjhh]hint cpu}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjfubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM3 hjbubj)}(hhh]h)}(hCPU number to execute work onh]hCPU number to execute work on}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj}hM3 hj~ubah}(h]h ]h"]h$]h&]uh1jhjbubeh}(h]h ]h"]h$]h&]uh1jkhj}hM3 hj_ubjl)}(h1``struct workqueue_struct *wq`` workqueue to use h](jr)}(h``struct workqueue_struct *wq``h]j)}(hjh]hstruct workqueue_struct *wq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM4 hjubj)}(hhh]h)}(hworkqueue to useh]hworkqueue to use}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM4 hjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhM4 hj_ubjl)}(h-``struct delayed_work *dwork`` work to queue h](jr)}(h``struct delayed_work *dwork``h]j)}(hjڐh]hstruct delayed_work *dwork}(hjܐhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjؐubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM5 hjԐubj)}(hhh]h)}(h work to queueh]h work to queue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM5 hjubah}(h]h ]h"]h$]h&]uh1jhjԐubeh}(h]h ]h"]h$]h&]uh1jkhjhM5 hj_ubjl)}(hB``unsigned long delay`` number of 
jiffies to wait before queueing h](jr)}(h``unsigned long delay``h]j)}(hjh]hunsigned long delay}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM6 hj ubj)}(hhh]h)}(h)number of jiffies to wait before queueingh]h)number of jiffies to wait before queueing}(hj,hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj(hM6 hj)ubah}(h]h ]h"]h$]h&]uh1jhj ubeh}(h]h ]h"]h$]h&]uh1jkhj(hM6 hj_ubeh}(h]h ]h"]h$]h&]uh1jfhjCubh)}(h**Description**h]j)}(hjNh]h Description}(hjPhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjLubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM8 hjCubh)}(hIf **dwork** is idle, equivalent to queue_delayed_work_on(); otherwise, modify **dwork**'s timer so that it expires after **delay**. If **delay** is zero, **work** is guaranteed to be scheduled immediately regardless of its current state.h](hIf }(hjdhhhNhNubj)}(h **dwork**h]hdwork}(hjlhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjdubhC is idle, equivalent to queue_delayed_work_on(); otherwise, modify }(hjdhhhNhNubj)}(h **dwork**h]hdwork}(hj~hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjdubh$’s timer so that it expires after }(hjdhhhNhNubj)}(h **delay**h]hdelay}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjdubh. If }(hjdhhhNhNubj)}(h **delay**h]hdelay}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjdubh is zero, }(hjdhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjdubhK is guaranteed to be scheduled immediately regardless of its current state.}(hjdhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM7 hjCubh)}(hlThis function is safe to call from any context including IRQ handler. See try_to_grab_pending() for details.h]hlThis function is safe to call from any context including IRQ handler. 
See try_to_grab_pending() for details.}(hj͑hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM= hjCubh)}(h **Return**h]j)}(hjޑh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjܑubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM@ hjCubh)}(hi``false`` if **dwork** was idle and queued, ``true`` if **dwork** was pending and its timer was modified.h](j)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if }(hjhhhNhNubj)}(h **dwork**h]hdwork}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh was idle and queued, }(hjhhhNhNubj)}(h``true``h]htrue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if }hjsbj)}(h **dwork**h]hdwork}(hj.hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh( was pending and its timer was modified.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM= hjCubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jqueue_rcu_work (C function)c.queue_rcu_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(hIbool queue_rcu_work (struct workqueue_struct *wq, struct rcu_work *rwork)h]j)}(hHbool queue_rcu_work(struct workqueue_struct *wq, struct rcu_work *rwork)h](j)}(hj*h]hbool}(hjghhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjchhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM^ ubj)}(h h]h }(hjuhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjchhhjthM^ ubj)}(hqueue_rcu_workh]j)}(hqueue_rcu_workh]hqueue_rcu_work}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjchhhjthM^ ubj|)}(h5(struct workqueue_struct *wq, struct rcu_work *rwork)h](j)}(hstruct workqueue_struct *wqh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjÒmodnameN 
classnameNj7j:)}j=]j@)}j3jsbc.queue_rcu_workasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hwqh]hwq}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct rcu_work *rworkh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hj"hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(hrcu_workh]hrcu_work}(hj3hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj0ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj5modnameN classnameNj7j:)}j=]jݒc.queue_rcu_workasbuh1hhjubj)}(h h]h }(hjQhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hj_hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hrworkh]hrwork}(hjlhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1j{hjchhhjthM^ ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj_hhhjthM^ ubah}(h]jZah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjthM^ hj\hhubjC)}(hhh]h)}(h#queue work after a RCU grace periodh]h#queue work after a RCU grace period}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM^ hjhhubah}(h]h ]h"]h$]h&]uh1jBhj\hhhjthM^ ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``struct workqueue_struct *wq`` workqueue to use ``struct rcu_work *rwork`` work to queue **Return** ``false`` if **rwork** was already pending, ``true`` otherwise. Note that a full RCU grace period is guaranteed only after a ``true`` return. 
While **rwork** is guaranteed to be executed after a ``false`` return, the execution may happen before a full RCU grace period has passed.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMb hjubjg)}(hhh](jl)}(h1``struct workqueue_struct *wq`` workqueue to use h](jr)}(h``struct workqueue_struct *wq``h]j)}(hjדh]hstruct workqueue_struct *wq}(hjٓhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjՓubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM_ hjѓubj)}(hhh]h)}(hworkqueue to useh]hworkqueue to use}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM_ hjubah}(h]h ]h"]h$]h&]uh1jhjѓubeh}(h]h ]h"]h$]h&]uh1jkhjhM_ hjΓubjl)}(h)``struct rcu_work *rwork`` work to queue h](jr)}(h``struct rcu_work *rwork``h]j)}(hjh]hstruct rcu_work *rwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM` hj ubj)}(hhh]h)}(h work to queueh]h work to queue}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj%hM` hj&ubah}(h]h ]h"]h$]h&]uh1jhj ubeh}(h]h ]h"]h$]h&]uh1jkhj%hM` hjΓubeh}(h]h ]h"]h$]h&]uh1jfhjubh)}(h **Return**h]j)}(hjKh]hReturn}(hjMhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjIubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMb hjubh)}(hX``false`` if **rwork** was already pending, ``true`` otherwise. Note that a full RCU grace period is guaranteed only after a ``true`` return. While **rwork** is guaranteed to be executed after a ``false`` return, the execution may happen before a full RCU grace period has passed.h](j)}(h ``false``h]hfalse}(hjehhhNhNubah}(h]h ]h"]h$]h&]uh1jhjaubh if }(hjahhhNhNubj)}(h **rwork**h]hrwork}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjaubh was already pending, }(hjahhhNhNubj)}(h``true``h]htrue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjaubhJ otherwise. 
Note that a full RCU grace period is guaranteed only after a }(hjahhhNhNubj)}(h``true``h]htrue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjaubh return. While }(hjahhhNhNubj)}(h **rwork**h]hrwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjaubh& is guaranteed to be executed after a }(hjahhhNhNubj)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjaubhL return, the execution may happen before a full RCU grace period has passed.}(hjahhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMb hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j"worker_attach_to_pool (C function)c.worker_attach_to_poolhNtauh1jhjhhhNhNubj)}(hhh](j)}(hLvoid worker_attach_to_pool (struct worker *worker, struct worker_pool *pool)h]j)}(hKvoid worker_attach_to_pool(struct worker *worker, struct worker_pool *pool)h](j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM ubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhM ubj)}(hworker_attach_to_poolh]j)}(hworker_attach_to_poolh]hworker_attach_to_pool}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhM ubj|)}(h1(struct worker *worker, struct worker_pool *pool)h](j)}(hstruct worker *workerh](j)}(hjh]hstruct}(hj5hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj1ubj)}(h h]h }(hjBhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj1ubh)}(hhh]j)}(hworkerh]hworker}(hjShhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjPubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjUmodnameN classnameNj7j:)}j=]j@)}j3jsbc.worker_attach_to_poolasbuh1hhj1ubj)}(h h]h }(hjshhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj1ubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj1ubj)}(hworkerh]hworker}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj1ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj-ubj)}(hstruct worker_pool *poolh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(h 
worker_poolh]h worker_pool}(hjŕhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj•ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjǕmodnameN classnameNj7j:)}j=]joc.worker_attach_to_poolasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hpoolh]hpool}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj-ubeh}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhM ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjhM ubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhM hjhhubjC)}(hhh]h)}(hattach a worker to a poolh]hattach a worker to a pool}(hj(hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hj%hhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjhM ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej@jfj@jgjhjiuh1jhhhjhNhNubjk)}(hX(**Parameters** ``struct worker *worker`` worker to be attached ``struct worker_pool *pool`` the target pool **Description** Attach **worker** to **pool**. Once attached, the ``WORKER_UNBOUND`` flag and cpu-binding of **worker** are kept coordinated with the pool across cpu-[un]hotplugs.h](h)}(h**Parameters**h]j)}(hjJh]h Parameters}(hjLhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjHubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjDubjg)}(hhh](jl)}(h0``struct worker *worker`` worker to be attached h](jr)}(h``struct worker *worker``h]j)}(hjih]hstruct worker *worker}(hjkhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjgubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjcubj)}(hhh]h)}(hworker to be attachedh]hworker to be attached}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj~hM hjubah}(h]h ]h"]h$]h&]uh1jhjcubeh}(h]h ]h"]h$]h&]uh1jkhj~hM hj`ubjl)}(h-``struct worker_pool *pool`` the target pool h](jr)}(h``struct worker_pool *pool``h]j)}(hjh]hstruct worker_pool *pool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h 
]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubj)}(hhh]h)}(hthe target poolh]hthe target pool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM hjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhM hj`ubeh}(h]h ]h"]h$]h&]uh1jfhjDubh)}(h**Description**h]j)}(hjݖh]h Description}(hjߖhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjۖubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjDubh)}(hAttach **worker** to **pool**. Once attached, the ``WORKER_UNBOUND`` flag and cpu-binding of **worker** are kept coordinated with the pool across cpu-[un]hotplugs.h](hAttach }(hjhhhNhNubj)}(h **worker**h]hworker}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh to }(hjhhhNhNubj)}(h**pool**h]hpool}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh. Once attached, the }(hjhhhNhNubj)}(h``WORKER_UNBOUND``h]hWORKER_UNBOUND}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh flag and cpu-binding of }(hjhhhNhNubj)}(h **worker**h]hworker}(hj1hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh< are kept coordinated with the pool across cpu-[un]hotplugs.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjDubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j$worker_detach_from_pool (C function)c.worker_detach_from_poolhNtauh1jhjhhhNhNubj)}(hhh](j)}(h4void worker_detach_from_pool (struct worker *worker)h]j)}(h3void worker_detach_from_pool(struct worker *worker)h](j)}(hvoidh]hvoid}(hjjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjfhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM ubj)}(h h]h }(hjyhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjfhhhjxhM ubj)}(hworker_detach_from_poolh]j)}(hworker_detach_from_poolh]hworker_detach_from_pool}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjfhhhjxhM ubj|)}(h(struct worker *worker)h]j)}(hstruct worker *workerh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h 
]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkerh]hworker}(hjŗhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj—ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjǗmodnameN classnameNj7j:)}j=]j@)}j3jsbc.worker_detach_from_poolasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hworkerh]hworker}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j{hjfhhhjxhM ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjbhhhjxhM ubah}(h]j]ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjxhM hj_hhubjC)}(hhh]h)}(hdetach a worker from its poolh]hdetach a worker from its pool}(hj*hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hj'hhubah}(h]h ]h"]h$]h&]uh1jBhj_hhhjxhM ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejBjfjBjgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``struct worker *worker`` worker which is attached to its pool **Description** Undo the attaching which had been done in worker_attach_to_pool(). 
The caller worker shouldn't access to the pool after detached except it has other reference to the pool.h](h)}(h**Parameters**h]j)}(hjLh]h Parameters}(hjNhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjJubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjFubjg)}(hhh]jl)}(h?``struct worker *worker`` worker which is attached to its pool h](jr)}(h``struct worker *worker``h]j)}(hjkh]hstruct worker *worker}(hjmhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjiubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjeubj)}(hhh]h)}(h$worker which is attached to its poolh]h$worker which is attached to its pool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM hjubah}(h]h ]h"]h$]h&]uh1jhjeubeh}(h]h ]h"]h$]h&]uh1jkhjhM hjbubah}(h]h ]h"]h$]h&]uh1jfhjFubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjFubh)}(hUndo the attaching which had been done in worker_attach_to_pool(). The caller worker shouldn't access to the pool after detached except it has other reference to the pool.h]hUndo the attaching which had been done in worker_attach_to_pool(). 
The caller worker shouldn’t access to the pool after detached except it has other reference to the pool.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjFubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jcreate_worker (C function)c.create_workerhNtauh1jhjhhhNhNubj)}(hhh](j)}(h8struct worker * create_worker (struct worker_pool *pool)h]j)}(h6struct worker *create_worker(struct worker_pool *pool)h](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM ubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhM ubh)}(hhh]j)}(hworkerh]hworker}(hj hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj modnameN classnameNj7j:)}j=]j@)}j3 create_workersbc.create_workerasbuh1hhjhhhjhM ubj)}(h h]h }(hj+hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhM ubjU)}(hjuh]h*}(hj9hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjhhhjhM ubj)}(h create_workerh]j)}(hj(h]h create_worker}(hjJhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjFubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhM ubj|)}(h(struct worker_pool *pool)h]j)}(hstruct worker_pool *poolh](j)}(hjh]hstruct}(hjehhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjaubj)}(h h]h }(hjrhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjaubh)}(hhh]j)}(h worker_poolh]h worker_pool}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j&c.create_workerasbuh1hhjaubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjaubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjaubj)}(hpoolh]hpool}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjaubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj]ubah}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhM ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjhM ubah}(h]jޘah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhM hjhhubjC)}(hhh]h)}(hcreate a new workqueue workerh]hcreate a new workqueue worker}(hjhhhNhNubah}(h]h 
]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjhhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjhM ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(hX **Parameters** ``struct worker_pool *pool`` pool the new worker will belong to **Description** Create and start a new worker which is attached to **pool**. **Context** Might sleep. Does GFP_KERNEL allocations. **Return** Pointer to the newly created worker.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubjg)}(hhh]jl)}(h@``struct worker_pool *pool`` pool the new worker will belong to h](jr)}(h``struct worker_pool *pool``h]j)}(hj'h]hstruct worker_pool *pool}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj%ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hj!ubj)}(hhh]h)}(h"pool the new worker will belong toh]h"pool the new worker will belong to}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj<hM hj=ubah}(h]h ]h"]h$]h&]uh1jhj!ubeh}(h]h ]h"]h$]h&]uh1jkhj<hM hjubah}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hjbh]h Description}(hjdhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj`ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubh)}(hidle_list and into list **Description** Tag **worker** for destruction and adjust **pool** stats accordingly. The worker should be idle. 
**Context** raw_spin_lock_irq(pool->lock).h](h)}(h**Parameters**h]j)}(hjYh]h Parameters}(hj[hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjWubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM] hjSubjg)}(hhh](jl)}(h1``struct worker *worker`` worker to be destroyed h](jr)}(h``struct worker *worker``h]j)}(hjxh]hstruct worker *worker}(hjzhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjvubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMZ hjrubj)}(hhh]h)}(hworker to be destroyedh]hworker to be destroyed}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMZ hjubah}(h]h ]h"]h$]h&]uh1jhjrubeh}(h]h ]h"]h$]h&]uh1jkhjhMZ hjoubjl)}(hW``struct list_head *list`` transfer worker away from its pool->idle_list and into list h](jr)}(h``struct list_head *list``h]j)}(hjh]hstruct list_head *list}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM[ hjubj)}(hhh]h)}(h;transfer worker away from its pool->idle_list and into listh]h;transfer worker away from its pool->idle_list and into list}(hjʜhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjƜhM[ hjǜubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjƜhM[ hjoubeh}(h]h ]h"]h$]h&]uh1jfhjSubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM] hjSubh)}(haTag **worker** for destruction and adjust **pool** stats accordingly. The worker should be idle.h](hTag }(hjhhhNhNubj)}(h **worker**h]hworker}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh for destruction and adjust }(hjhhhNhNubj)}(h**pool**h]hpool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh/ stats accordingly. 
The worker should be idle.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM\ hjSubh)}(h **Context**h]j)}(hj7h]hContext}(hj9hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj5ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM_ hjSubh)}(hraw_spin_lock_irq(pool->lock).h]hraw_spin_lock_irq(pool->lock).}(hjMhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM` hjSubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j idle_worker_timeout (C function)c.idle_worker_timeouthNtauh1jhjhhhNhNubj)}(hhh](j)}(h/void idle_worker_timeout (struct timer_list *t)h]j)}(h.void idle_worker_timeout(struct timer_list *t)h](j)}(hvoidh]hvoid}(hj|hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjxhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM| ubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjxhhhjhM| ubj)}(hidle_worker_timeouth]j)}(hidle_worker_timeouth]hidle_worker_timeout}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjxhhhjhM| ubj|)}(h(struct timer_list *t)h]j)}(hstruct timer_list *th](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjƝhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(h timer_listh]h timer_list}(hjםhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjԝubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjٝmodnameN classnameNj7j:)}j=]j@)}j3jsbc.idle_worker_timeoutasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hth]ht}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j{hjxhhhjhM| ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjthhhjhM| ubah}(h]joah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhM| hjqhhubjC)}(hhh]h)}(h.check if some idle workers can now be deleted.h]h.check if some idle workers can now be 
deleted.}(hj<hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM| hj9hhubah}(h]h ]h"]h$]h&]uh1jBhjqhhhjhM| ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejTjfjTjgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``struct timer_list *t`` The pool's idle_timer that just expired **Description** The timer is armed in worker_enter_idle(). Note that it isn't disarmed in worker_leave_idle(), as a worker flicking between idle and active while its pool is at the too_many_workers() tipping point would cause too much timer housekeeping overhead. Since IDLE_WORKER_TIMEOUT is long enough, we just let it expire and re-evaluate things from there.h](h)}(h**Parameters**h]j)}(hj^h]h Parameters}(hj`hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj\ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjXubjg)}(hhh]jl)}(hA``struct timer_list *t`` The pool's idle_timer that just expired h](jr)}(h``struct timer_list *t``h]j)}(hj}h]hstruct timer_list *t}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj{ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM} hjwubj)}(hhh]h)}(h'The pool's idle_timer that just expiredh]h)The pool’s idle_timer that just expired}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM} hjubah}(h]h ]h"]h$]h&]uh1jhjwubeh}(h]h ]h"]h$]h&]uh1jkhjhM} hjtubah}(h]h ]h"]h$]h&]uh1jfhjXubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjXubh)}(hXZThe timer is armed in worker_enter_idle(). Note that it isn't disarmed in worker_leave_idle(), as a worker flicking between idle and active while its pool is at the too_many_workers() tipping point would cause too much timer housekeeping overhead. 
Since IDLE_WORKER_TIMEOUT is long enough, we just let it expire and re-evaluate things from there.h]hX\The timer is armed in worker_enter_idle(). Note that it isn’t disarmed in worker_leave_idle(), as a worker flicking between idle and active while its pool is at the too_many_workers() tipping point would cause too much timer housekeeping overhead. Since IDLE_WORKER_TIMEOUT is long enough, we just let it expire and re-evaluate things from there.}(hjΞhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM~ hjXubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jidle_cull_fn (C function)c.idle_cull_fnhNtauh1jhjhhhNhNubj)}(hhh](j)}(h,void idle_cull_fn (struct work_struct *work)h]j)}(h+void idle_cull_fn(struct work_struct *work)h](j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM ubj)}(h h]h }(hj hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhj hM ubj)}(h idle_cull_fnh]j)}(h idle_cull_fnh]h idle_cull_fn}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhj hM ubj|)}(h(struct work_struct *work)h]j)}(hstruct work_struct *workh](j)}(hjh]hstruct}(hj:hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj6ubj)}(h h]h }(hjGhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj6ubh)}(hhh]j)}(h work_structh]h work_struct}(hjXhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjUubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjZmodnameN classnameNj7j:)}j=]j@)}j3j sbc.idle_cull_fnasbuh1hhj6ubj)}(h h]h }(hjxhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj6ubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj6ubj)}(hworkh]hwork}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj6ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj2ubah}(h]h ]h"]h$]h&]jjuh1j{hjhhhj hM ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhj hM ubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj hM hjhhubjC)}(hhh]h)}(h.cull workers that have been idle for too long.h]h.cull workers that have been idle for too 
long.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjhhubah}(h]h ]h"]h$]h&]uh1jBhjhhhj hM ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej՟jfj՟jgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``struct work_struct *work`` the pool's work for handling these idle workers **Description** This goes through a pool's idle workers and gets rid of those that have been idle for at least IDLE_WORKER_TIMEOUT seconds. We don't want to disturb isolated CPUs because of a pcpu kworker being culled, so this also resets worker affinity. This requires a sleepable context, hence the split between timer callback and work item.h](h)}(h**Parameters**h]j)}(hjߟh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjݟubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjٟubjg)}(hhh]jl)}(hM``struct work_struct *work`` the pool's work for handling these idle workers h](jr)}(h``struct work_struct *work``h]j)}(hjh]hstruct work_struct *work}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubj)}(hhh]h)}(h/the pool's work for handling these idle workersh]h1the pool’s work for handling these idle workers}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM hjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhM hjubah}(h]h ]h"]h$]h&]uh1jfhjٟubh)}(h**Description**h]j)}(hj9h]h Description}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj7ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjٟubh)}(h{This goes through a pool's idle workers and gets rid of those that have been idle for at least IDLE_WORKER_TIMEOUT seconds.h]h}This goes through a pool’s idle workers and gets rid of those that have been idle for at least IDLE_WORKER_TIMEOUT seconds.}(hjOhhhNhNubah}(h]h 
]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjٟubh)}(hWe don't want to disturb isolated CPUs because of a pcpu kworker being culled, so this also resets worker affinity. This requires a sleepable context, hence the split between timer callback and work item.h]hWe don’t want to disturb isolated CPUs because of a pcpu kworker being culled, so this also resets worker affinity. This requires a sleepable context, hence the split between timer callback and work item.}(hj^hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjٟubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j maybe_create_worker (C function)c.maybe_create_workerhNtauh1jhjhhhNhNubj)}(hhh](j)}(h3void maybe_create_worker (struct worker_pool *pool)h]j)}(h2void maybe_create_worker(struct worker_pool *pool)h](j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM ubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhM ubj)}(hmaybe_create_workerh]j)}(hmaybe_create_workerh]hmaybe_create_worker}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhM ubj|)}(h(struct worker_pool *pool)h]j)}(hstruct worker_pool *poolh](j)}(hjh]hstruct}(hjʠhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjƠubj)}(h h]h }(hjנhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjƠubh)}(hhh]j)}(h worker_poolh]h worker_pool}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3jsbc.maybe_create_workerasbuh1hhjƠubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjƠubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjƠubj)}(hpoolh]hpool}(hj#hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjƠubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj ubah}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhM ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjhM ubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhM 
hjhhubjC)}(hhh]h)}(h create a new worker if necessaryh]h create a new worker if necessary}(hjMhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjJhhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjhM ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejejfjejgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``struct worker_pool *pool`` pool to create a new worker for **Description** Create a new worker for **pool** if necessary. **pool** is guaranteed to have at least one idle worker on return from this function. If creating a new worker takes longer than MAYDAY_INTERVAL, mayday is sent to all rescuers with works scheduled on **pool** to resolve possible allocation deadlock. On return, need_to_create_worker() is guaranteed to be ``false`` and may_start_working() ``true``. LOCKING: raw_spin_lock_irq(pool->lock) which may be released and regrabbed multiple times. Does GFP_KERNEL allocations. Called only from manager.h](h)}(h**Parameters**h]j)}(hjoh]h Parameters}(hjqhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjmubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjiubjg)}(hhh]jl)}(h=``struct worker_pool *pool`` pool to create a new worker for h](jr)}(h``struct worker_pool *pool``h]j)}(hjh]hstruct worker_pool *pool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubj)}(hhh]h)}(hpool to create a new worker forh]hpool to create a new worker for}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM hjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhM hjubah}(h]h ]h"]h$]h&]uh1jfhjiubh)}(h**Description**h]j)}(hjɡh]h Description}(hjˡhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjǡubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjiubh)}(hX+Create a new worker for **pool** if necessary. 
**pool** is guaranteed to have at least one idle worker on return from this function. If creating a new worker takes longer than MAYDAY_INTERVAL, mayday is sent to all rescuers with works scheduled on **pool** to resolve possible allocation deadlock.h](hCreate a new worker for }(hjߡhhhNhNubj)}(h**pool**h]hpool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjߡubh if necessary. }(hjߡhhhNhNubj)}(h**pool**h]hpool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjߡubh is guaranteed to have at least one idle worker on return from this function. If creating a new worker takes longer than MAYDAY_INTERVAL, mayday is sent to all rescuers with works scheduled on }(hjߡhhhNhNubj)}(h**pool**h]hpool}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjߡubh) to resolve possible allocation deadlock.}(hjߡhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjiubh)}(hbOn return, need_to_create_worker() is guaranteed to be ``false`` and may_start_working() ``true``.h](h7On return, need_to_create_worker() is guaranteed to be }(hj$hhhNhNubj)}(h ``false``h]hfalse}(hj,hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj$ubh and may_start_working() }(hj$hhhNhNubj)}(h``true``h]htrue}(hj>hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj$ubh.}(hj$hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjiubh)}(hLOCKING: raw_spin_lock_irq(pool->lock) which may be released and regrabbed multiple times. Does GFP_KERNEL allocations. Called only from manager.h]hLOCKING: raw_spin_lock_irq(pool->lock) which may be released and regrabbed multiple times. Does GFP_KERNEL allocations. 
Called only from manager.}(hjWhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjiubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jmanage_workers (C function)c.manage_workershNtauh1jhjhhhNhNubj)}(hhh](j)}(h+bool manage_workers (struct worker *worker)h]j)}(h*bool manage_workers(struct worker *worker)h](j)}(hj*h]hbool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMK ubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhMK ubj)}(hmanage_workersh]j)}(hmanage_workersh]hmanage_workers}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhMK ubj|)}(h(struct worker *worker)h]j)}(hstruct worker *workerh](j)}(hjh]hstruct}(hj¢hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjϢhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkerh]hworker}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjݢubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3jsbc.manage_workersasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hworkerh]hworker}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhMK ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj~hhhjhMK ubah}(h]jyah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhMK hj{hhubjC)}(hhh]h)}(hmanage worker poolh]hmanage worker pool}(hjEhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMK hjBhhubah}(h]h ]h"]h$]h&]uh1jBhj{hhhjhMK ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej]jfj]jgjhjiuh1jhhhjhNhNubjk)}(hX)**Parameters** ``struct worker *worker`` self **Description** Assume the manager role and manage the worker pool **worker** belongs to. At any given time, there can be only zero or one manager per pool. The exclusion is handled automatically by this function. 
The caller can safely start processing works on false return. On true return, it's guaranteed that need_to_create_worker() is false and may_start_working() is true. **Context** raw_spin_lock_irq(pool->lock) which may be released and regrabbed multiple times. Does GFP_KERNEL allocations. **Return** ``false`` if the pool doesn't need management and the caller can safely start processing works, ``true`` if management function was performed and the conditions that the caller verified before calling the function may no longer be true.h](h)}(h**Parameters**h]j)}(hjgh]h Parameters}(hjihhhNhNubah}(h]h ]h"]h$]h&]uh1jhjeubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMO hjaubjg)}(hhh]jl)}(h``struct worker *worker`` self h](jr)}(h``struct worker *worker``h]j)}(hjh]hstruct worker *worker}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chML hjubj)}(hhh]h)}(hselfh]hself}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhML hjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhML hj}ubah}(h]h ]h"]h$]h&]uh1jfhjaubh)}(h**Description**h]j)}(hjh]h Description}(hjãhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMN hjaubh)}(hAssume the manager role and manage the worker pool **worker** belongs to. At any given time, there can be only zero or one manager per pool. The exclusion is handled automatically by this function.h](h3Assume the manager role and manage the worker pool }(hjףhhhNhNubj)}(h **worker**h]hworker}(hjߣhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjףubh belongs to. At any given time, there can be only zero or one manager per pool. 
The exclusion is handled automatically by this function.}(hjףhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMM hjaubh)}(hThe caller can safely start processing works on false return. On true return, it's guaranteed that need_to_create_worker() is false and may_start_working() is true.h]hThe caller can safely start processing works on false return. On true return, it’s guaranteed that need_to_create_worker() is false and may_start_working() is true.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMQ hjaubh)}(h **Context**h]j)}(hj h]hContext}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMU hjaubh)}(horaw_spin_lock_irq(pool->lock) which may be released and regrabbed multiple times. Does GFP_KERNEL allocations.h]horaw_spin_lock_irq(pool->lock) which may be released and regrabbed multiple times. 
Does GFP_KERNEL allocations.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMV hjaubh)}(h **Return**h]j)}(hj0h]hReturn}(hj2hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj.ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMY hjaubh)}(h``false`` if the pool doesn't need management and the caller can safely start processing works, ``true`` if management function was performed and the conditions that the caller verified before calling the function may no longer be true.h](j)}(h ``false``h]hfalse}(hjJhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjFubhY if the pool doesn’t need management and the caller can safely start processing works, }(hjFhhhNhNubj)}(h``true``h]htrue}(hj\hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjFubh if management function was performed and the conditions that the caller verified before calling the function may no longer be true.}(hjFhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMZ hjaubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jprocess_one_work (C function)c.process_one_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(hGvoid process_one_work (struct worker *worker, struct work_struct *work)h]j)}(hFvoid process_one_work(struct worker *worker, struct work_struct *work)h](j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMs ubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhMs ubj)}(hprocess_one_workh]j)}(hprocess_one_workh]hprocess_one_work}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhMs ubj|)}(h1(struct worker *worker, struct work_struct *work)h](j)}(hstruct worker *workerh](j)}(hjh]hstruct}(hjҤhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjΤubj)}(h h]h }(hjߤhhhNhNubah}(h]h ]j 
ah"]h$]h&]uh1jhjΤubh)}(hhh]j)}(hworkerh]hworker}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3jsbc.process_one_workasbuh1hhjΤubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjΤubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjΤubj)}(hworkerh]hworker}(hj+hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjΤubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjʤubj)}(hstruct work_struct *workh](j)}(hjh]hstruct}(hjDhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj@ubj)}(h h]h }(hjQhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj@ubh)}(hhh]j)}(h work_structh]h work_struct}(hjbhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj_ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjdmodnameN classnameNj7j:)}j=]j c.process_one_workasbuh1hhj@ubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj@ubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj@ubj)}(hworkh]hwork}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj@ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjʤubeh}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhMs ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjhMs ubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhMs hjhhubjC)}(hhh]h)}(hprocess single workh]hprocess single work}(hjťhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMs hj¥hhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjhMs ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejݥjfjݥjgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``struct worker *worker`` self ``struct work_struct *work`` work to process **Description** Process **work**. This function contains all the logics necessary to process a single work including synchronization against and interaction with other workers on the same cpu, queueing and flushing. As long as context requirement is met, any worker can call this function to process a work. 
**Context** raw_spin_lock_irq(pool->lock) which is released and regrabbed.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMw hjubjg)}(hhh](jl)}(h``struct worker *worker`` self h](jr)}(h``struct worker *worker``h]j)}(hjh]hstruct worker *worker}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMt hjubj)}(hhh]h)}(hselfh]hself}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMt hjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhMt hjubjl)}(h-``struct work_struct *work`` work to process h](jr)}(h``struct work_struct *work``h]j)}(hj?h]hstruct work_struct *work}(hjAhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj=ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMu hj9ubj)}(hhh]h)}(hwork to processh]hwork to process}(hjXhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjThMu hjUubah}(h]h ]h"]h$]h&]uh1jhj9ubeh}(h]h ]h"]h$]h&]uh1jkhjThMu hjubeh}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hjzh]h Description}(hj|hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjxubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMw hjubh)}(hX%Process **work**. This function contains all the logics necessary to process a single work including synchronization against and interaction with other workers on the same cpu, queueing and flushing. As long as context requirement is met, any worker can call this function to process a work.h](hProcess }(hjhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhX. This function contains all the logics necessary to process a single work including synchronization against and interaction with other workers on the same cpu, queueing and flushing. 
As long as context requirement is met, any worker can call this function to process a work.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMv hjubh)}(h **Context**h]j)}(hjh]hContext}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM| hjubh)}(h>raw_spin_lock_irq(pool->lock) which is released and regrabbed.h]h>raw_spin_lock_irq(pool->lock) which is released and regrabbed.}(hjɦhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM} hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j$process_scheduled_works (C function)c.process_scheduled_workshNtauh1jhjhhhNhNubj)}(hhh](j)}(h4void process_scheduled_works (struct worker *worker)h]j)}(h3void process_scheduled_works(struct worker *worker)h](j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM# ubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhM# ubj)}(hprocess_scheduled_worksh]j)}(hprocess_scheduled_worksh]hprocess_scheduled_works}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhM# ubj|)}(h(struct worker *worker)h]j)}(hstruct worker *workerh](j)}(hjh]hstruct}(hj5hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj1ubj)}(h h]h }(hjBhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj1ubh)}(hhh]j)}(hworkerh]hworker}(hjShhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjPubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjUmodnameN classnameNj7j:)}j=]j@)}j3jsbc.process_scheduled_worksasbuh1hhj1ubj)}(h h]h }(hjshhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj1ubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj1ubj)}(hworkerh]hworker}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj1ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj-ubah}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhM# ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjhM# 
ubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhM# hjhhubjC)}(hhh]h)}(hprocess scheduled worksh]hprocess scheduled works}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM# hjhhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjhM# ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejЧjfjЧjgjhjiuh1jhhhjhNhNubjk)}(hXQ**Parameters** ``struct worker *worker`` self **Description** Process all scheduled works. Please note that the scheduled list may change while processing a work, so this function repeatedly fetches a work from the top and executes it. **Context** raw_spin_lock_irq(pool->lock) which may be released and regrabbed multiple times.h](h)}(h**Parameters**h]j)}(hjڧh]h Parameters}(hjܧhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjاubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM' hjԧubjg)}(hhh]jl)}(h``struct worker *worker`` self h](jr)}(h``struct worker *worker``h]j)}(hjh]hstruct worker *worker}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM$ hjubj)}(hhh]h)}(hselfh]hself}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM$ hjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhM$ hjubah}(h]h ]h"]h$]h&]uh1jfhjԧubh)}(h**Description**h]j)}(hj4h]h Description}(hj6hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj2ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM& hjԧubh)}(hProcess all scheduled works. Please note that the scheduled list may change while processing a work, so this function repeatedly fetches a work from the top and executes it.h]hProcess all scheduled works. 
Please note that the scheduled list may change while processing a work, so this function repeatedly fetches a work from the top and executes it.}(hjJhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM% hjԧubh)}(h **Context**h]j)}(hj[h]hContext}(hj]hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjYubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM) hjԧubh)}(hQraw_spin_lock_irq(pool->lock) which may be released and regrabbed multiple times.h]hQraw_spin_lock_irq(pool->lock) which may be released and regrabbed multiple times.}(hjqhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM* hjԧubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jworker_thread (C function)c.worker_threadhNtauh1jhjhhhNhNubj)}(hhh](j)}(h"int worker_thread (void *__worker)h]j)}(h!int worker_thread(void *__worker)h](j)}(hinth]hint}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMH ubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhMH ubj)}(h worker_threadh]j)}(h worker_threadh]h worker_thread}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhMH ubj|)}(h(void *__worker)h]j)}(hvoid *__workerh](j)}(hvoidh]hvoid}(hjݨhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj٨ubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj٨ubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj٨ubj)}(h__workerh]h__worker}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj٨ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjըubah}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhMH ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjhMH ubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhMH hjhhubjC)}(hhh]h)}(hthe worker thread functionh]hthe worker thread function}(hj0hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMH 
hj-hhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjhMH ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejHjfjHjgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``void *__worker`` self **Description** The worker thread function. All workers belong to a worker_pool - either a per-cpu one or dynamic unbound one. These workers process all work items regardless of their specific target workqueue. The only exception is work items which belong to workqueues with a rescuer which will be explained in rescuer_thread(). **Return** 0h](h)}(h**Parameters**h]j)}(hjRh]h Parameters}(hjThhhNhNubah}(h]h ]h"]h$]h&]uh1jhjPubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chML hjLubjg)}(hhh]jl)}(h``void *__worker`` self h](jr)}(h``void *__worker``h]j)}(hjqh]hvoid *__worker}(hjshhhNhNubah}(h]h ]h"]h$]h&]uh1jhjoubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMI hjkubj)}(hhh]h)}(hselfh]hself}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMI hjubah}(h]h ]h"]h$]h&]uh1jhjkubeh}(h]h ]h"]h$]h&]uh1jkhjhMI hjhubah}(h]h ]h"]h$]h&]uh1jfhjLubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMK hjLubh)}(hX=The worker thread function. All workers belong to a worker_pool - either a per-cpu one or dynamic unbound one. These workers process all work items regardless of their specific target workqueue. The only exception is work items which belong to workqueues with a rescuer which will be explained in rescuer_thread().h]hX=The worker thread function. All workers belong to a worker_pool - either a per-cpu one or dynamic unbound one. These workers process all work items regardless of their specific target workqueue. 
The only exception is work items which belong to workqueues with a rescuer which will be explained in rescuer_thread().}(hj©hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMJ hjLubh)}(h **Return**h]j)}(hjөh]hReturn}(hjթhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjѩubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMP hjLubh)}(hj1{h]h0}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMQ hjLubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jrescuer_thread (C function)c.rescuer_threadhNtauh1jhjhhhNhNubj)}(hhh](j)}(h$int rescuer_thread (void *__rescuer)h]j)}(h#int rescuer_thread(void *__rescuer)h](j)}(hinth]hint}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM ubj)}(h h]h }(hj&hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhj%hM ubj)}(hrescuer_threadh]j)}(hrescuer_threadh]hrescuer_thread}(hj8hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj4ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhj%hM ubj|)}(h(void *__rescuer)h]j)}(hvoid *__rescuerh](j)}(hvoidh]hvoid}(hjThhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjPubj)}(h h]h }(hjbhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjPubjU)}(hjuh]h*}(hjphhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjPubj)}(h __rescuerh]h __rescuer}(hj}hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjPubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjLubah}(h]h ]h"]h$]h&]jjuh1j{hjhhhj%hM ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhj%hM ubah}(h]j ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj%hM hj hhubjC)}(hhh]h)}(hthe rescuer thread functionh]hthe rescuer thread function}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjhhubah}(h]h ]h"]h$]h&]uh1jBhj hhhj%hM ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``void *__rescuer`` self **Description** Workqueue 
rescuer thread function. There's one rescuer for each workqueue which has WQ_MEM_RECLAIM set. Regular work processing on a pool may block trying to create a new worker which uses GFP_KERNEL allocation which has slight chance of developing into deadlock if some works currently on the same queue need to be processed to satisfy the GFP_KERNEL allocation. This is the problem rescuer solves. When such condition is possible, the pool summons rescuers of all workqueues which have works queued on the pool and let them process those works so that forward progress can be guaranteed. This should happen rarely. **Return** 0h](h)}(h**Parameters**h]j)}(hjɪh]h Parameters}(hj˪hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjǪubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjêubjg)}(hhh]jl)}(h``void *__rescuer`` self h](jr)}(h``void *__rescuer``h]j)}(hjh]hvoid *__rescuer}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubj)}(hhh]h)}(hselfh]hself}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM hjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhM hjߪubah}(h]h ]h"]h$]h&]uh1jfhjêubh)}(h**Description**h]j)}(hj#h]h Description}(hj%hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj!ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjêubh)}(hhWorkqueue rescuer thread function. There's one rescuer for each workqueue which has WQ_MEM_RECLAIM set.h]hjWorkqueue rescuer thread function. 
There’s one rescuer for each workqueue which has WQ_MEM_RECLAIM set.}(hj9hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjêubh)}(hX(Regular work processing on a pool may block trying to create a new worker which uses GFP_KERNEL allocation which has slight chance of developing into deadlock if some works currently on the same queue need to be processed to satisfy the GFP_KERNEL allocation. This is the problem rescuer solves.h]hX(Regular work processing on a pool may block trying to create a new worker which uses GFP_KERNEL allocation which has slight chance of developing into deadlock if some works currently on the same queue need to be processed to satisfy the GFP_KERNEL allocation. This is the problem rescuer solves.}(hjHhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjêubh)}(hWhen such condition is possible, the pool summons rescuers of all workqueues which have works queued on the pool and let them process those works so that forward progress can be guaranteed.h]hWhen such condition is possible, the pool summons rescuers of all workqueues which have works queued on the pool and let them process those works so that forward progress can be guaranteed.}(hjWhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjêubh)}(hThis should happen rarely.h]hThis should happen rarely.}(hjfhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjêubh)}(h **Return**h]j)}(hjwh]hReturn}(hjyhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjuubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjêubh)}(hj1{h]h0}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjêubeh}(h]h ] 
kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j#check_flush_dependency (C function)c.check_flush_dependencyhNtauh1jhjhhhNhNubj)}(hhh](j)}(hsvoid check_flush_dependency (struct workqueue_struct *target_wq, struct work_struct *target_work, bool from_cancel)h]j)}(hrvoid check_flush_dependency(struct workqueue_struct *target_wq, struct work_struct *target_work, bool from_cancel)h](j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjʫhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjɫhMubj)}(hcheck_flush_dependencyh]j)}(hcheck_flush_dependencyh]hcheck_flush_dependency}(hjܫhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjثubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjɫhMubj|)}(hW(struct workqueue_struct *target_wq, struct work_struct *target_work, bool from_cancel)h](j)}(h"struct workqueue_struct *target_wqh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3jޫsbc.check_flush_dependencyasbuh1hhjubj)}(h h]h }(hj6hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hjDhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(h target_wqh]h target_wq}(hjQhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct work_struct *target_workh](j)}(hjh]hstruct}(hjjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjfubj)}(h h]h }(hjwhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjfubh)}(hhh]j)}(h work_structh]h work_struct}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j2c.check_flush_dependencyasbuh1hhjfubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjfubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjfubj)}(h target_workh]h target_work}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjfubeh}(h]h 
]h"]h$]h&]noemphjjuh1jhjubj)}(hbool from_cancelh](j)}(hj*h]hbool}(hjڬhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj֬ubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj֬ubj)}(h from_cancelh]h from_cancel}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj֬ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1j{hjhhhjɫhMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjɫhMubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjɫhMhjhhubjC)}(hhh]h)}(h!check for flush dependency sanityh]h!check for flush dependency sanity}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjɫhMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej7jfj7jgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``struct workqueue_struct *target_wq`` workqueue being flushed ``struct work_struct *target_work`` work item being flushed (NULL for workqueue flushes) ``bool from_cancel`` are we called from the work cancel path **Description** ``current`` is trying to flush the whole **target_wq** or **target_work** on it. 
If this is not the cancel path (which implies work being flushed is either already running, or will not be at all), check if **target_wq** doesn't have ``WQ_MEM_RECLAIM`` and verify that ``current`` is not reclaiming memory or running on a workqueue which doesn't have ``WQ_MEM_RECLAIM`` as that can break forward- progress guarantee leading to a deadlock.h](h)}(h**Parameters**h]j)}(hjAh]h Parameters}(hjChhhNhNubah}(h]h ]h"]h$]h&]uh1jhj?ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj;ubjg)}(hhh](jl)}(h?``struct workqueue_struct *target_wq`` workqueue being flushed h](jr)}(h&``struct workqueue_struct *target_wq``h]j)}(hj`h]h"struct workqueue_struct *target_wq}(hjbhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj^ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjZubj)}(hhh]h)}(hworkqueue being flushedh]hworkqueue being flushed}(hjyhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjuhMhjvubah}(h]h ]h"]h$]h&]uh1jhjZubeh}(h]h ]h"]h$]h&]uh1jkhjuhMhjWubjl)}(hY``struct work_struct *target_work`` work item being flushed (NULL for workqueue flushes) h](jr)}(h#``struct work_struct *target_work``h]j)}(hjh]hstruct work_struct *target_work}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubj)}(hhh]h)}(h4work item being flushed (NULL for workqueue flushes)h]h4work item being flushed (NULL for workqueue flushes)}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhMhjWubjl)}(h=``bool from_cancel`` are we called from the work cancel path h](jr)}(h``bool from_cancel``h]j)}(hjҭh]hbool from_cancel}(hjԭhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjЭubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj̭ubj)}(hhh]h)}(h'are we called from the work cancel pathh]h'are we called from the work cancel 
path}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jhj̭ubeh}(h]h ]h"]h$]h&]uh1jkhjhMhjWubeh}(h]h ]h"]h$]h&]uh1jfhj;ubh)}(h**Description**h]j)}(hj h]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj;ubh)}(hX``current`` is trying to flush the whole **target_wq** or **target_work** on it. If this is not the cancel path (which implies work being flushed is either already running, or will not be at all), check if **target_wq** doesn't have ``WQ_MEM_RECLAIM`` and verify that ``current`` is not reclaiming memory or running on a workqueue which doesn't have ``WQ_MEM_RECLAIM`` as that can break forward- progress guarantee leading to a deadlock.h](j)}(h ``current``h]hcurrent}(hj'hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj#ubh is trying to flush the whole }(hj#hhhNhNubj)}(h **target_wq**h]h target_wq}(hj9hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj#ubh or }(hj#hhhNhNubj)}(h**target_work**h]h target_work}(hjKhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj#ubh on it. 
If this is not the cancel path (which implies work being flushed is either already running, or will not be at all), check if }(hj#hhhNhNubj)}(h **target_wq**h]h target_wq}(hj]hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj#ubh doesn’t have }(hj#hhhNhNubj)}(h``WQ_MEM_RECLAIM``h]hWQ_MEM_RECLAIM}(hjohhhNhNubah}(h]h ]h"]h$]h&]uh1jhj#ubh and verify that }(hj#hhhNhNubj)}(h ``current``h]hcurrent}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj#ubhI is not reclaiming memory or running on a workqueue which doesn’t have }(hj#hhhNhNubj)}(h``WQ_MEM_RECLAIM``h]hWQ_MEM_RECLAIM}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj#ubhE as that can break forward- progress guarantee leading to a deadlock.}(hj#hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj;ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jinsert_wq_barrier (C function)c.insert_wq_barrierhNtauh1jhjhhhNhNubj)}(hhh](j)}(hvoid insert_wq_barrier (struct pool_workqueue *pwq, struct wq_barrier *barr, struct work_struct *target, struct worker *worker)h]j)}(h~void insert_wq_barrier(struct pool_workqueue *pwq, struct wq_barrier *barr, struct work_struct *target, struct worker *worker)h](j)}(hvoidh]hvoid}(hj̮hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjȮhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjۮhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjȮhhhjڮhMubj)}(hinsert_wq_barrierh]j)}(hinsert_wq_barrierh]hinsert_wq_barrier}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjȮhhhjڮhMubj|)}(hh(struct pool_workqueue *pwq, struct wq_barrier *barr, struct work_struct *target, struct worker *worker)h](j)}(hstruct pool_workqueue *pwqh](j)}(hjh]hstruct}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(hpool_workqueueh]hpool_workqueue}(hj'hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj$ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj)modnameN 
classnameNj7j:)}j=]j@)}j3jsbc.insert_wq_barrierasbuh1hhjubj)}(h h]h }(hjGhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hjUhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hpwqh]hpwq}(hjbhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct wq_barrier *barrh](j)}(hjh]hstruct}(hj{hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjwubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjwubh)}(hhh]j)}(h wq_barrierh]h wq_barrier}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]jCc.insert_wq_barrierasbuh1hhjwubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjwubjU)}(hjuh]h*}(hjůhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjwubj)}(hbarrh]hbarr}(hjүhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjwubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct work_struct *targeth](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(h work_structh]h work_struct}(hj hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj modnameN classnameNj7j:)}j=]jCc.insert_wq_barrierasbuh1hhjubj)}(h h]h }(hj'hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hj5hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(htargeth]htarget}(hjBhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct worker *workerh](j)}(hjh]hstruct}(hj[hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjWubj)}(h h]h }(hjhhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjWubh)}(hhh]j)}(hworkerh]hworker}(hjyhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjvubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj{modnameN classnameNj7j:)}j=]jCc.insert_wq_barrierasbuh1hhjWubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjWubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjWubj)}(hworkerh]hworker}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjWubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1j{hjȮhhhjڮhMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjĮhhhjڮhMubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjڮhMhjhhubjC)}(hhh]h)}(hinsert a 
barrier workh]hinsert a barrier work}(hjܰhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjٰhhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjڮhMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``struct pool_workqueue *pwq`` pwq to insert barrier into ``struct wq_barrier *barr`` wq_barrier to insert ``struct work_struct *target`` target work to attach **barr** to ``struct worker *worker`` worker currently executing **target**, NULL if **target** is not executing **Description** **barr** is linked to **target** such that **barr** is completed only after **target** finishes execution. Please note that the ordering guarantee is observed only with respect to **target** and on the local cpu. Currently, a queued barrier can't be canceled. This is because try_to_grab_pending() can't determine whether the work to be grabbed is at the head of the queue and thus can't clear LINKED flag of the previous work while there must be a valid next work after a work with LINKED flag set. Note that when **worker** is non-NULL, **target** may be modified underneath us, so we can't reliably determine pwq from **target**. 
**Context** raw_spin_lock_irq(pool->lock).h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubjg)}(hhh](jl)}(h:``struct pool_workqueue *pwq`` pwq to insert barrier into h](jr)}(h``struct pool_workqueue *pwq``h]j)}(hjh]hstruct pool_workqueue *pwq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubj)}(hhh]h)}(hpwq to insert barrier intoh]hpwq to insert barrier into}(hj6hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj2hMhj3ubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhj2hMhjubjl)}(h1``struct wq_barrier *barr`` wq_barrier to insert h](jr)}(h``struct wq_barrier *barr``h]j)}(hjVh]hstruct wq_barrier *barr}(hjXhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjTubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjPubj)}(hhh]h)}(hwq_barrier to inserth]hwq_barrier to insert}(hjohhhNhNubah}(h]h ]h"]h$]h&]uh1hhjkhM hjlubah}(h]h ]h"]h$]h&]uh1jhjPubeh}(h]h ]h"]h$]h&]uh1jkhjkhM hjubjl)}(hA``struct work_struct *target`` target work to attach **barr** to h](jr)}(h``struct work_struct *target``h]j)}(hjh]hstruct work_struct *target}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubj)}(hhh]h)}(h!target work to attach **barr** toh](htarget work to attach }(hjhhhNhNubj)}(h**barr**h]hbarr}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh to}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhjhM hjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhM hjubjl)}(he``struct worker *worker`` worker currently executing **target**, NULL if **target** is not executing h](jr)}(h``struct worker *worker``h]j)}(hjڱh]hstruct worker *worker}(hjܱhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjرubah}(h]h 
]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjԱubj)}(hhh]h)}(hJworker currently executing **target**, NULL if **target** is not executingh](hworker currently executing }(hjhhhNhNubj)}(h **target**h]htarget}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh , NULL if }(hjhhhNhNubj)}(h **target**h]htarget}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh is not executing}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhjhM hjubah}(h]h ]h"]h$]h&]uh1jhjԱubeh}(h]h ]h"]h$]h&]uh1jkhjhM hjubeh}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hj9h]h Description}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj7ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubh)}(h**barr** is linked to **target** such that **barr** is completed only after **target** finishes execution. Please note that the ordering guarantee is observed only with respect to **target** and on the local cpu.h](j)}(h**barr**h]hbarr}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1jhjOubh is linked to }(hjOhhhNhNubj)}(h **target**h]htarget}(hjehhhNhNubah}(h]h ]h"]h$]h&]uh1jhjOubh such that }(hjOhhhNhNubj)}(h**barr**h]hbarr}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjOubh is completed only after }(hjOhhhNhNubj)}(h **target**h]htarget}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjOubh_ finishes execution. Please note that the ordering guarantee is observed only with respect to }(hjOhhhNhNubj)}(h **target**h]htarget}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjOubh and on the local cpu.}(hjOhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubh)}(hXCurrently, a queued barrier can't be canceled. This is because try_to_grab_pending() can't determine whether the work to be grabbed is at the head of the queue and thus can't clear LINKED flag of the previous work while there must be a valid next work after a work with LINKED flag set.h]hX%Currently, a queued barrier can’t be canceled. 
This is because try_to_grab_pending() can’t determine whether the work to be grabbed is at the head of the queue and thus can’t clear LINKED flag of the previous work while there must be a valid next work after a work with LINKED flag set.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(hNote that when **worker** is non-NULL, **target** may be modified underneath us, so we can't reliably determine pwq from **target**.h](hNote that when }(hjòhhhNhNubj)}(h **worker**h]hworker}(hj˲hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjòubh is non-NULL, }(hjòhhhNhNubj)}(h **target**h]htarget}(hjݲhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjòubhJ may be modified underneath us, so we can’t reliably determine pwq from }(hjòhhhNhNubj)}(h **target**h]htarget}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjòubh.}(hjòhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(h **Context**h]j)}(hj h]hContext}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(hraw_spin_lock_irq(pool->lock).h]hraw_spin_lock_irq(pool->lock).}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j&flush_workqueue_prep_pwqs (C function)c.flush_workqueue_prep_pwqshNtauh1jhjhhhNhNubj)}(hhh](j)}(h]bool flush_workqueue_prep_pwqs (struct workqueue_struct *wq, int flush_color, int work_color)h]j)}(h\bool flush_workqueue_prep_pwqs(struct workqueue_struct *wq, int flush_color, int work_color)h](j)}(hj*h]hbool}(hjOhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjKhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMTubj)}(h h]h }(hj]hhhNhNubah}(h]h ]j 
ah"]h$]h&]uh1jhjKhhhj\hMTubj)}(hflush_workqueue_prep_pwqsh]j)}(hflush_workqueue_prep_pwqsh]hflush_workqueue_prep_pwqs}(hjohhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjkubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjKhhhj\hMTubj|)}(h>(struct workqueue_struct *wq, int flush_color, int work_color)h](j)}(hstruct workqueue_struct *wqh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3jqsbc.flush_workqueue_prep_pwqsasbuh1hhjubj)}(h h]h }(hjɳhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hj׳hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hwqh]hwq}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hint flush_colorh](j)}(hinth]hint}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hj hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubj)}(h flush_colorh]h flush_color}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hint work_colorh](j)}(hinth]hint}(hj2hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj.ubj)}(h h]h }(hj@hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj.ubj)}(h work_colorh]h work_color}(hjNhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj.ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1j{hjKhhhj\hMTubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjGhhhj\hMTubah}(h]jBah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj\hMThjDhhubjC)}(hhh]h)}(h#prepare pwqs for workqueue flushingh]h#prepare pwqs for workqueue flushing}(hjxhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMThjuhhubah}(h]h ]h"]h$]h&]uh1jBhjDhhhj\hMTubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(hXa**Parameters** ``struct workqueue_struct *wq`` workqueue being flushed ``int flush_color`` new flush color, < 0 for no-op ``int work_color`` new work color, < 0 for no-op **Description** Prepare pwqs for workqueue flushing. 
If **flush_color** is non-negative, flush_color on all pwqs should be -1. If no pwq has in-flight commands at the specified color, all pwq->flush_color's stay at -1 and ``false`` is returned. If any pwq has in flight commands, its pwq->flush_color is set to **flush_color**, **wq->nr_pwqs_to_flush** is updated accordingly, pwq wakeup logic is armed and ``true`` is returned. The caller should have initialized **wq->first_flusher** prior to calling this function with non-negative **flush_color**. If **flush_color** is negative, no flush color update is done and ``false`` is returned. If **work_color** is non-negative, all pwqs should have the same work_color which is previous to **work_color** and all will be advanced to **work_color**. **Context** mutex_lock(wq->mutex). **Return** ``true`` if **flush_color** >= 0 and there's something to flush. ``false`` otherwise.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMXhjubjg)}(hhh](jl)}(h8``struct workqueue_struct *wq`` workqueue being flushed h](jr)}(h``struct workqueue_struct *wq``h]j)}(hjh]hstruct workqueue_struct *wq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMUhjubj)}(hhh]h)}(hworkqueue being flushedh]hworkqueue being flushed}(hjҴhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjδhMUhjϴubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjδhMUhjubjl)}(h3``int flush_color`` new flush color, < 0 for no-op h](jr)}(h``int flush_color``h]j)}(hjh]hint flush_color}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMVhjubj)}(hhh]h)}(hnew flush color, < 0 for no-oph]hnew flush color, < 0 for no-op}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMVhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h 
]h"]h$]h&]uh1jkhjhMVhjubjl)}(h1``int work_color`` new work color, < 0 for no-op h](jr)}(h``int work_color``h]j)}(hj+h]hint work_color}(hj-hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj)ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMWhj%ubj)}(hhh]h)}(hnew work color, < 0 for no-oph]hnew work color, < 0 for no-op}(hjDhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj@hMWhjAubah}(h]h ]h"]h$]h&]uh1jhj%ubeh}(h]h ]h"]h$]h&]uh1jkhj@hMWhjubeh}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hjfh]h Description}(hjhhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjdubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMYhjubh)}(h$Prepare pwqs for workqueue flushing.h]h$Prepare pwqs for workqueue flushing.}(hj|hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMXhjubh)}(hXyIf **flush_color** is non-negative, flush_color on all pwqs should be -1. If no pwq has in-flight commands at the specified color, all pwq->flush_color's stay at -1 and ``false`` is returned. If any pwq has in flight commands, its pwq->flush_color is set to **flush_color**, **wq->nr_pwqs_to_flush** is updated accordingly, pwq wakeup logic is armed and ``true`` is returned.h](hIf }(hjhhhNhNubj)}(h**flush_color**h]h flush_color}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh is non-negative, flush_color on all pwqs should be -1. If no pwq has in-flight commands at the specified color, all pwq->flush_color’s stay at -1 and }(hjhhhNhNubj)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhQ is returned. 
If any pwq has in flight commands, its pwq->flush_color is set to }(hjhhhNhNubj)}(h**flush_color**h]h flush_color}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh, }(hjhhhNhNubj)}(h**wq->nr_pwqs_to_flush**h]hwq->nr_pwqs_to_flush}(hjɵhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh7 is updated accordingly, pwq wakeup logic is armed and }(hjhhhNhNubj)}(h``true``h]htrue}(hj۵hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh is returned.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMZhjubh)}(hThe caller should have initialized **wq->first_flusher** prior to calling this function with non-negative **flush_color**. If **flush_color** is negative, no flush color update is done and ``false`` is returned.h](h#The caller should have initialized }(hjhhhNhNubj)}(h**wq->first_flusher**h]hwq->first_flusher}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh2 prior to calling this function with non-negative }(hjhhhNhNubj)}(h**flush_color**h]h flush_color}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh. 
If }(hjhhhNhNubj)}(h**flush_color**h]h flush_color}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh0 is negative, no flush color update is done and }(hjhhhNhNubj)}(h ``false``h]hfalse}(hj2hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh is returned.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMahjubh)}(hIf **work_color** is non-negative, all pwqs should have the same work_color which is previous to **work_color** and all will be advanced to **work_color**.h](hIf }(hjKhhhNhNubj)}(h**work_color**h]h work_color}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1jhjKubhP is non-negative, all pwqs should have the same work_color which is previous to }(hjKhhhNhNubj)}(h**work_color**h]h work_color}(hjehhhNhNubah}(h]h ]h"]h$]h&]uh1jhjKubh and all will be advanced to }(hjKhhhNhNubj)}(h**work_color**h]h work_color}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjKubh.}(hjKhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMfhjubh)}(h **Context**h]j)}(hjh]hContext}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMjhjubh)}(hmutex_lock(wq->mutex).h]hmutex_lock(wq->mutex).}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMkhjubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMmhjubh)}(hV``true`` if **flush_color** >= 0 and there's something to flush. ``false`` otherwise.h](j)}(h``true``h]htrue}(hjӶhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj϶ubh if }(hj϶hhhNhNubj)}(h**flush_color**h]h flush_color}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj϶ubh) >= 0 and there’s something to flush. 
}(hj϶hhhNhNubj)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj϶ubh otherwise.}(hj϶hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMnhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j__flush_workqueue (C function)c.__flush_workqueuehNtauh1jhjhhhNhNubj)}(hhh](j)}(h4void __flush_workqueue (struct workqueue_struct *wq)h]j)}(h3void __flush_workqueue(struct workqueue_struct *wq)h](j)}(hvoidh]hvoid}(hj0hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj,hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hj?hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj,hhhj>hMubj)}(h__flush_workqueueh]j)}(h__flush_workqueueh]h__flush_workqueue}(hjQhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjMubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj,hhhj>hMubj|)}(h(struct workqueue_struct *wq)h]j)}(hstruct workqueue_struct *wqh](j)}(hjh]hstruct}(hjmhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjiubj)}(h h]h }(hjzhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjiubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3jSsbc.__flush_workqueueasbuh1hhjiubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjiubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjiubj)}(hwqh]hwq}(hjƷhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjiubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjeubah}(h]h ]h"]h$]h&]jjuh1j{hj,hhhj>hMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj(hhhj>hMubah}(h]j#ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj>hMhj%hhubjC)}(hhh]h)}(h5ensure that any scheduled work has run to completion.h]h5ensure that any scheduled work has run to completion.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jBhj%hhhj>hMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(h**Parameters** ``struct workqueue_struct *wq`` workqueue to flush 
**Description** This function sleeps until all work items which were queued on entry have finished execution, but it is not livelocked by new incoming ones.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj ubjg)}(hhh]jl)}(h3``struct workqueue_struct *wq`` workqueue to flush h](jr)}(h``struct workqueue_struct *wq``h]j)}(hj1h]hstruct workqueue_struct *wq}(hj3hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj/ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj+ubj)}(hhh]h)}(hworkqueue to flushh]hworkqueue to flush}(hjJhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjFhMhjGubah}(h]h ]h"]h$]h&]uh1jhj+ubeh}(h]h ]h"]h$]h&]uh1jkhjFhMhj(ubah}(h]h ]h"]h$]h&]uh1jfhj ubh)}(h**Description**h]j)}(hjlh]h Description}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj ubh)}(hThis function sleeps until all work items which were queued on entry have finished execution, but it is not livelocked by new incoming ones.h]hThis function sleeps until all work items which were queued on entry have finished execution, but it is not livelocked by new incoming ones.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jdrain_workqueue (C function)c.drain_workqueuehNtauh1jhjhhhNhNubj)}(hhh](j)}(h2void drain_workqueue (struct workqueue_struct *wq)h]j)}(h1void drain_workqueue(struct workqueue_struct *wq)h](j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMfubj)}(h h]h }(hjhhhNhNubah}(h]h ]j 
ah"]h$]h&]uh1jhjhhhjhMfubj)}(hdrain_workqueueh]j)}(hdrain_workqueueh]hdrain_workqueue}(hjҸhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjθubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhMfubj|)}(h(struct workqueue_struct *wq)h]j)}(hstruct workqueue_struct *wqh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hj hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3jԸsbc.drain_workqueueasbuh1hhjubj)}(h h]h }(hj,hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hj:hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hwqh]hwq}(hjGhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhMfubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjhMfubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhMfhjhhubjC)}(hhh]h)}(hdrain a workqueueh]hdrain a workqueue}(hjqhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMfhjnhhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjhMfubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``struct workqueue_struct *wq`` workqueue to drain **Description** Wait until the workqueue becomes empty. While draining is in progress, only chain queueing is allowed. IOW, only currently pending or running work items on **wq** can queue further work items on it. **wq** is flushed repeatedly until it becomes empty. The number of flushing is determined by the depth of chaining and should be relatively short. 
Whine if it takes too long.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMjhjubjg)}(hhh]jl)}(h3``struct workqueue_struct *wq`` workqueue to drain h](jr)}(h``struct workqueue_struct *wq``h]j)}(hjh]hstruct workqueue_struct *wq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMghjubj)}(hhh]h)}(hworkqueue to drainh]hworkqueue to drain}(hj˹hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjǹhMghjȹubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjǹhMghjubah}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMihjubh)}(hXzWait until the workqueue becomes empty. While draining is in progress, only chain queueing is allowed. IOW, only currently pending or running work items on **wq** can queue further work items on it. **wq** is flushed repeatedly until it becomes empty. The number of flushing is determined by the depth of chaining and should be relatively short. Whine if it takes too long.h](hWait until the workqueue becomes empty. While draining is in progress, only chain queueing is allowed. IOW, only currently pending or running work items on }(hjhhhNhNubj)}(h**wq**h]hwq}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh& can queue further work items on it. }(hjhhhNhNubj)}(h**wq**h]hwq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh is flushed repeatedly until it becomes empty. The number of flushing is determined by the depth of chaining and should be relatively short. 
Whine if it takes too long.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jflush_work (C function) c.flush_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(h*bool flush_work (struct work_struct *work)h]j)}(h)bool flush_work(struct work_struct *work)h](j)}(hj*h]hbool}(hjVhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjRhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM ubj)}(h h]h }(hjdhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjRhhhjchM ubj)}(h flush_workh]j)}(h flush_workh]h flush_work}(hjvhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjrubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjRhhhjchM ubj|)}(h(struct work_struct *work)h]j)}(hstruct work_struct *workh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(h work_structh]h work_struct}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3jxsb c.flush_workasbuh1hhjubj)}(h h]h }(hjкhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hj޺hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hworkh]hwork}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j{hjRhhhjchM ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjNhhhjchM ubah}(h]jIah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjchM hjKhhubjC)}(hhh]h)}(h>wait for a work to finish executing the last queueing instanceh]h>wait for a work to finish executing the last queueing instance}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjhhubah}(h]h ]h"]h$]h&]uh1jBhjKhhhjchM ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej-jfj-jgjhjiuh1jhhhjhNhNubjk)}(hXL**Parameters** ``struct work_struct *work`` the work to flush **Description** Wait until **work** has finished execution. 
**work** is guaranteed to be idle on return if it hasn't been requeued since flush started. **Return** ``true`` if flush_work() waited for the work to finish execution, ``false`` if it was already idle.h](h)}(h**Parameters**h]j)}(hj7h]h Parameters}(hj9hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj5ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj1ubjg)}(hhh]jl)}(h/``struct work_struct *work`` the work to flush h](jr)}(h``struct work_struct *work``h]j)}(hjVh]hstruct work_struct *work}(hjXhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjTubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjPubj)}(hhh]h)}(hthe work to flushh]hthe work to flush}(hjohhhNhNubah}(h]h ]h"]h$]h&]uh1hhjkhM hjlubah}(h]h ]h"]h$]h&]uh1jhjPubeh}(h]h ]h"]h$]h&]uh1jkhjkhM hjMubah}(h]h ]h"]h$]h&]uh1jfhj1ubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hj1ubh)}(hWait until **work** has finished execution. **work** is guaranteed to be idle on return if it hasn't been requeued since flush started.h](h Wait until }(hjhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh has finished execution. 
}(hjhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhU is guaranteed to be idle on return if it hasn’t been requeued since flush started.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hj1ubh)}(h **Return**h]j)}(hjܻh]hReturn}(hj޻hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjڻubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj1ubh)}(hc``true`` if flush_work() waited for the work to finish execution, ``false`` if it was already idle.h](j)}(h``true``h]htrue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh: if flush_work() waited for the work to finish execution, }(hjhhhNhNubj)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if it was already idle.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj1ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jflush_delayed_work (C function)c.flush_delayed_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(h4bool flush_delayed_work (struct delayed_work *dwork)h]j)}(h3bool flush_delayed_work(struct delayed_work *dwork)h](j)}(hj*h]hbool}(hjAhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj=hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjOhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj=hhhjNhMubj)}(hflush_delayed_workh]j)}(hflush_delayed_workh]hflush_delayed_work}(hjahhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj]ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj=hhhjNhMubj|)}(h(struct delayed_work *dwork)h]j)}(hstruct delayed_work *dworkh](j)}(hjh]hstruct}(hj}hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjyubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjyubh)}(hhh]j)}(h delayed_workh]h delayed_work}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3jcsbc.flush_delayed_workasbuh1hhjyubj)}(h h]h }(hjhhhNhNubah}(h]h ]j 
ah"]h$]h&]uh1jhjyubjU)}(hjuh]h*}(hjɼhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjyubj)}(hdworkh]hdwork}(hjּhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjyubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjuubah}(h]h ]h"]h$]h&]jjuh1j{hj=hhhjNhMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj9hhhjNhMubah}(h]j4ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjNhMhj6hhubjC)}(hhh]h)}(h6wait for a dwork to finish executing the last queueingh]h6wait for a dwork to finish executing the last queueing}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jBhj6hhhjNhMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(hXz**Parameters** ``struct delayed_work *dwork`` the delayed work to flush **Description** Delayed timer is cancelled and the pending work is queued for immediate execution. Like flush_work(), this function only considers the last queueing instance of **dwork**. **Return** ``true`` if flush_work() waited for the work to finish execution, ``false`` if it was already idle.h](h)}(h**Parameters**h]j)}(hj"h]h Parameters}(hj$hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubjg)}(hhh]jl)}(h9``struct delayed_work *dwork`` the delayed work to flush h](jr)}(h``struct delayed_work *dwork``h]j)}(hjAh]hstruct delayed_work *dwork}(hjChhhNhNubah}(h]h ]h"]h$]h&]uh1jhj?ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj;ubj)}(hhh]h)}(hthe delayed work to flushh]hthe delayed work to flush}(hjZhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjVhMhjWubah}(h]h ]h"]h$]h&]uh1jhj;ubeh}(h]h ]h"]h$]h&]uh1jkhjVhMhj8ubah}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hj|h]h Description}(hj~hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjzubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(hDelayed timer is cancelled and the pending work is 
queued for immediate execution. Like flush_work(), this function only considers the last queueing instance of **dwork**.h](hDelayed timer is cancelled and the pending work is queued for immediate execution. Like flush_work(), this function only considers the last queueing instance of }(hjhhhNhNubj)}(h **dwork**h]hdwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM"hjubh)}(hc``true`` if flush_work() waited for the work to finish execution, ``false`` if it was already idle.h](j)}(h``true``h]htrue}(hjϽhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj˽ubh: if flush_work() waited for the work to finish execution, }(hj˽hhhNhNubj)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj˽ubh if it was already idle.}(hj˽hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM#hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jflush_rcu_work (C function)c.flush_rcu_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(h,bool flush_rcu_work (struct rcu_work *rwork)h]j)}(h+bool flush_rcu_work(struct rcu_work *rwork)h](j)}(hj*h]hbool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM2ubj)}(h h]h }(hj(hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhj'hM2ubj)}(hflush_rcu_workh]j)}(hflush_rcu_workh]hflush_rcu_work}(hj:hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj6ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhj'hM2ubj|)}(h(struct rcu_work *rwork)h]j)}(hstruct rcu_work *rworkh](j)}(hjh]hstruct}(hjVhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjRubj)}(h h]h }(hjchhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjRubh)}(hhh]j)}(hrcu_workh]hrcu_work}(hjthhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjqubah}(h]h 
]h"]h$]h&] refdomainj_reftypej3 reftargetjvmodnameN classnameNj7j:)}j=]j@)}j3j<sbc.flush_rcu_workasbuh1hhjRubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjRubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjRubj)}(hrworkh]hrwork}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjRubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjNubah}(h]h ]h"]h$]h&]jjuh1j{hjhhhj'hM2ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhj'hM2ubah}(h]j ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj'hM2hjhhubjC)}(hhh]h)}(h6wait for a rwork to finish executing the last queueingh]h6wait for a rwork to finish executing the last queueing}(hjپhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM2hj־hhubah}(h]h ]h"]h$]h&]uh1jBhjhhhj'hM2ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(h**Parameters** ``struct rcu_work *rwork`` the rcu work to flush **Return** ``true`` if flush_rcu_work() waited for the work to finish execution, ``false`` if it was already idle.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM6hjubjg)}(hhh]jl)}(h1``struct rcu_work *rwork`` the rcu work to flush h](jr)}(h``struct rcu_work *rwork``h]j)}(hjh]hstruct rcu_work *rwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM3hjubj)}(hhh]h)}(hthe rcu work to flushh]hthe rcu work to flush}(hj3hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj/hM3hj0ubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhj/hM3hjubah}(h]h ]h"]h$]h&]uh1jfhjubh)}(h **Return**h]j)}(hjUh]hReturn}(hjWhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjSubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM5hjubh)}(hg``true`` if flush_rcu_work() waited for the work to finish execution, ``false`` if it was already 
idle.h](j)}(h``true``h]htrue}(hjohhhNhNubah}(h]h ]h"]h$]h&]uh1jhjkubh> if flush_rcu_work() waited for the work to finish execution, }(hjkhhhNhNubj)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjkubh if it was already idle.}(hjkhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM5hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jcancel_work_sync (C function)c.cancel_work_synchNtauh1jhjhhhNhNubj)}(hhh](j)}(h0bool cancel_work_sync (struct work_struct *work)h]j)}(h/bool cancel_work_sync(struct work_struct *work)h](j)}(hj*h]hbool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjȿhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjǿhMubj)}(hcancel_work_synch]j)}(hcancel_work_synch]hcancel_work_sync}(hjڿhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjֿubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjǿhMubj|)}(h(struct work_struct *work)h]j)}(hstruct work_struct *workh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(h work_structh]h work_struct}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3jܿsbc.cancel_work_syncasbuh1hhjubj)}(h h]h }(hj4hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hjBhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hworkh]hwork}(hjOhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j{hjhhhjǿhMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjǿhMubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjǿhMhjhhubjC)}(hhh]h)}(h'cancel a work and wait for it to finishh]h'cancel a work and wait for it to finish}(hjyhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjvhhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjǿhMubeh}(h]h 
](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``struct work_struct *work`` the work to cancel **Description** Cancel **work** and wait for its execution to finish. This function can be used even if the work re-queues itself or migrates to another workqueue. On return from this function, **work** is guaranteed to be not pending or executing on any CPU as long as there aren't racing enqueues. cancel_work_sync(:c:type:`delayed_work->work `) must not be used for delayed_work's. Use cancel_delayed_work_sync() instead. Must be called from a sleepable context if **work** was last queued on a non-BH workqueue. Can also be called from non-hardirq atomic contexts including BH if **work** was last queued on a BH workqueue. Returns ``true`` if **work** was pending, ``false`` otherwise.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubjg)}(hhh]jl)}(h0``struct work_struct *work`` the work to cancel h](jr)}(h``struct work_struct *work``h]j)}(hjh]hstruct work_struct *work}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubj)}(hhh]h)}(hthe work to cancelh]hthe work to cancel}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhMhjubah}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(hXCancel **work** and wait for its execution to finish. This function can be used even if the work re-queues itself or migrates to another workqueue. 
On return from this function, **work** is guaranteed to be not pending or executing on any CPU as long as there aren't racing enqueues.h](hCancel }(hj hhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubh and wait for its execution to finish. This function can be used even if the work re-queues itself or migrates to another workqueue. On return from this function, }(hj hhhNhNubj)}(h**work**h]hwork}(hj%hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubhc is guaranteed to be not pending or executing on any CPU as long as there aren’t racing enqueues.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(hcancel_work_sync(:c:type:`delayed_work->work `) must not be used for delayed_work's. Use cancel_delayed_work_sync() instead.h](hcancel_work_sync(}(hj>hhhNhNubh)}(h+:c:type:`delayed_work->work `h]j)}(hjHh]hdelayed_work->work}(hjJhhhNhNubah}(h]h ](xrefj_c-typeeh"]h$]h&]uh1jhjFubah}(h]h ]h"]h$]h&]refdoccore-api/workqueue refdomainj_reftypetype refexplicitrefwarnj7j:)}j=]sb reftarget delayed_workuh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj>ubhP) must not be used for delayed_work’s. Use cancel_delayed_work_sync() instead.}(hj>hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhjkhMhjubh)}(hMust be called from a sleepable context if **work** was last queued on a non-BH workqueue. Can also be called from non-hardirq atomic contexts including BH if **work** was last queued on a BH workqueue.h](h+Must be called from a sleepable context if }(hjvhhhNhNubj)}(h**work**h]hwork}(hj~hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjvubhl was last queued on a non-BH workqueue. 
Can also be called from non-hardirq atomic contexts including BH if }(hjvhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjvubh# was last queued on a BH workqueue.}(hjvhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(h>Returns ``true`` if **work** was pending, ``false`` otherwise.h](hReturns }(hjhhhNhNubj)}(h``true``h]htrue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if }(hjhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh was pending, }(hjhhhNhNubj)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh otherwise.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j cancel_delayed_work (C function)c.cancel_delayed_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(h5bool cancel_delayed_work (struct delayed_work *dwork)h]j)}(h4bool cancel_delayed_work(struct delayed_work *dwork)h](j)}(hj*h]hbool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj hhhjhMubj)}(hcancel_delayed_workh]j)}(hcancel_delayed_workh]hcancel_delayed_work}(hj.hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj*ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj hhhjhMubj|)}(h(struct delayed_work *dwork)h]j)}(hstruct delayed_work *dworkh](j)}(hjh]hstruct}(hjJhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjFubj)}(h h]h }(hjWhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjFubh)}(hhh]j)}(h delayed_workh]h delayed_work}(hjhhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjeubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjjmodnameN classnameNj7j:)}j=]j@)}j3j0sbc.cancel_delayed_workasbuh1hhjFubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjFubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjFubj)}(hdworkh]hdwork}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjFubeh}(h]h 
]h"]h$]h&]noemphjjuh1jhjBubah}(h]h ]h"]h$]h&]jjuh1j{hj hhhjhMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjhMubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhMhjhhubjC)}(hhh]h)}(hcancel a delayed workh]hcancel a delayed work}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjhMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``struct delayed_work *dwork`` delayed_work to cancel **Description** Kill off a pending delayed_work. **Return** ``true`` if **dwork** was pending and canceled; ``false`` if it wasn't pending. **Note** The work callback function may still be running on return, unless it returns ``true`` and the work doesn't re-arm itself. Explicitly flush or use cancel_delayed_work_sync() to wait on it. This function is safe to call from any context including IRQ handler.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubjg)}(hhh]jl)}(h6``struct delayed_work *dwork`` delayed_work to cancel h](jr)}(h``struct delayed_work *dwork``h]j)}(hjh]hstruct delayed_work *dwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubj)}(hhh]h)}(hdelayed_work to cancelh]hdelayed_work to cancel}(hj'hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj#hMhj$ubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhj#hMhjubah}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hjIh]h Description}(hjKhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjGubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(h Kill off a pending delayed_work.h]h Kill off a pending delayed_work.}(hj_hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: 
./kernel/workqueue.chMhjubh)}(h **Return**h]j)}(hjph]hReturn}(hjrhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjnubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(hO``true`` if **dwork** was pending and canceled; ``false`` if it wasn't pending.h](j)}(h``true``h]htrue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if }(hjhhhNhNubj)}(h **dwork**h]hdwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh was pending and canceled; }(hjhhhNhNubj)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if it wasn’t pending.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(h**Note**h]j)}(hjh]hNote}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(hThe work callback function may still be running on return, unless it returns ``true`` and the work doesn't re-arm itself. Explicitly flush or use cancel_delayed_work_sync() to wait on it.h](hMThe work callback function may still be running on return, unless it returns }(hjhhhNhNubj)}(h``true``h]htrue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhi and the work doesn’t re-arm itself. 
Explicitly flush or use cancel_delayed_work_sync() to wait on it.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(hEThis function is safe to call from any context including IRQ handler.h]hEThis function is safe to call from any context including IRQ handler.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j%cancel_delayed_work_sync (C function)c.cancel_delayed_work_synchNtauh1jhjhhhNhNubj)}(hhh](j)}(h:bool cancel_delayed_work_sync (struct delayed_work *dwork)h]j)}(h9bool cancel_delayed_work_sync(struct delayed_work *dwork)h](j)}(hj*h]hbool}(hj/hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj+hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hj=hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj+hhhj<hMubj)}(hcancel_delayed_work_synch]j)}(hcancel_delayed_work_synch]hcancel_delayed_work_sync}(hjOhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjKubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj+hhhj<hMubj|)}(h(struct delayed_work *dwork)h]j)}(hstruct delayed_work *dworkh](j)}(hjh]hstruct}(hjkhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjgubj)}(h h]h }(hjxhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjgubh)}(hhh]j)}(h delayed_workh]h delayed_work}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3jQsbc.cancel_delayed_work_syncasbuh1hhjgubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjgubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjgubj)}(hdworkh]hdwork}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjgubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjcubah}(h]h ]h"]h$]h&]jjuh1j{hj+hhhj<hMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj'hhhj<hMubah}(h]j"ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj<hMhj$hhubjC)}(hhh]h)}(h/cancel a delayed work and wait for it to finishh]h/cancel a delayed work and wait for it to 
finish}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jBhj$hhhj<hMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(h**Parameters** ``struct delayed_work *dwork`` the delayed work cancel **Description** This is cancel_work_sync() for delayed works. **Return** ``true`` if **dwork** was pending, ``false`` otherwise.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj ubjg)}(hhh]jl)}(h7``struct delayed_work *dwork`` the delayed work cancel h](jr)}(h``struct delayed_work *dwork``h]j)}(hj/h]hstruct delayed_work *dwork}(hj1hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj-ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj)ubj)}(hhh]h)}(hthe delayed work cancelh]hthe delayed work cancel}(hjHhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjDhMhjEubah}(h]h ]h"]h$]h&]uh1jhj)ubeh}(h]h ]h"]h$]h&]uh1jkhjDhMhj&ubah}(h]h ]h"]h$]h&]uh1jfhj ubh)}(h**Description**h]j)}(hjjh]h Description}(hjlhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjhubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj ubh)}(h-This is cancel_work_sync() for delayed works.h]h-This is cancel_work_sync() for delayed works.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj ubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj ubh)}(h7``true`` if **dwork** was pending, ``false`` otherwise.h](j)}(h``true``h]htrue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if }(hjhhhNhNubj)}(h **dwork**h]hdwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh was pending, }(hjhhhNhNubj)}(h 
``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh otherwise.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jdisable_work (C function)c.disable_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(h,bool disable_work (struct work_struct *work)h]j)}(h+bool disable_work(struct work_struct *work)h](j)}(hj*h]hbool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhMubj)}(h disable_workh]j)}(h disable_workh]h disable_work}(hj(hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj$ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhMubj|)}(h(struct work_struct *work)h]j)}(hstruct work_struct *workh](j)}(hjh]hstruct}(hjDhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj@ubj)}(h h]h }(hjQhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj@ubh)}(hhh]j)}(h work_structh]h work_struct}(hjbhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj_ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjdmodnameN classnameNj7j:)}j=]j@)}j3j*sbc.disable_workasbuh1hhj@ubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj@ubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj@ubj)}(hworkh]hwork}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj@ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj<ubah}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjhMubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhMhjhhubjC)}(hhh]h)}(hDisable and cancel a work itemh]hDisable and cancel a work item}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjhMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``struct work_struct *work`` work item to disable **Description** Disable **work** by incrementing its disable count and cancel it if currently pending. 
As long as the disable count is non-zero, any attempt to queue **work** will fail and return ``false``. The maximum supported disable depth is 2 to the power of ``WORK_OFFQ_DISABLE_BITS``, currently 65536. Can be called from any context. Returns ``true`` if **work** was pending, ``false`` otherwise.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubjg)}(hhh]jl)}(h2``struct work_struct *work`` work item to disable h](jr)}(h``struct work_struct *work``h]j)}(hjh]hstruct work_struct *work}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubj)}(hhh]h)}(hwork item to disableh]hwork item to disable}(hj!hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhMhjubah}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hjCh]h Description}(hjEhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjAubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(hX$Disable **work** by incrementing its disable count and cancel it if currently pending. As long as the disable count is non-zero, any attempt to queue **work** will fail and return ``false``. The maximum supported disable depth is 2 to the power of ``WORK_OFFQ_DISABLE_BITS``, currently 65536.h](hDisable }(hjYhhhNhNubj)}(h**work**h]hwork}(hjahhhNhNubah}(h]h ]h"]h$]h&]uh1jhjYubh by incrementing its disable count and cancel it if currently pending. As long as the disable count is non-zero, any attempt to queue }(hjYhhhNhNubj)}(h**work**h]hwork}(hjshhhNhNubah}(h]h ]h"]h$]h&]uh1jhjYubh will fail and return }(hjYhhhNhNubj)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjYubh;. 
The maximum supported disable depth is 2 to the power of }(hjYhhhNhNubj)}(h``WORK_OFFQ_DISABLE_BITS``h]hWORK_OFFQ_DISABLE_BITS}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjYubh, currently 65536.}(hjYhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(h^Can be called from any context. Returns ``true`` if **work** was pending, ``false`` otherwise.h](h(Can be called from any context. Returns }(hjhhhNhNubj)}(h``true``h]htrue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if }(hjhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh was pending, }(hjhhhNhNubj)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh otherwise.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jdisable_work_sync (C function)c.disable_work_synchNtauh1jhjhhhNhNubj)}(hhh](j)}(h1bool disable_work_sync (struct work_struct *work)h]j)}(h0bool disable_work_sync(struct work_struct *work)h](j)}(hj*h]hbool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hj#hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhj"hMubj)}(hdisable_work_synch]j)}(hdisable_work_synch]hdisable_work_sync}(hj5hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj1ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhj"hMubj|)}(h(struct work_struct *work)h]j)}(hstruct work_struct *workh](j)}(hjh]hstruct}(hjQhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjMubj)}(h h]h }(hj^hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjMubh)}(hhh]j)}(h work_structh]h work_struct}(hjohhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjlubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjqmodnameN classnameNj7j:)}j=]j@)}j3j7sbc.disable_work_syncasbuh1hhjMubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjMubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjMubj)}(hworkh]hwork}(hjhhhNhNubah}(h]h 
]j"ah"]h$]h&]uh1jhjMubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjIubah}(h]h ]h"]h$]h&]jjuh1j{hjhhhj"hMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj hhhj"hMubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj"hMhj hhubjC)}(hhh]h)}(h%Disable, cancel and drain a work itemh]h%Disable, cancel and drain a work item}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jBhj hhhj"hMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``struct work_struct *work`` work item to disable **Description** Similar to disable_work() but also wait for **work** to finish if currently executing. Must be called from a sleepable context if **work** was last queued on a non-BH workqueue. Can also be called from non-hardirq atomic contexts including BH if **work** was last queued on a BH workqueue. Returns ``true`` if **work** was pending, ``false`` otherwise.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubjg)}(hhh]jl)}(h2``struct work_struct *work`` work item to disable h](jr)}(h``struct work_struct *work``h]j)}(hjh]hstruct work_struct *work}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubj)}(hhh]h)}(hwork item to disableh]hwork item to disable}(hj.hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj*hMhj+ubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhj*hMhj ubah}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hjPh]h Description}(hjRhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(hVSimilar to disable_work() but also wait for **work** to finish if currently executing.h](h,Similar to disable_work() but also wait for 
}(hjfhhhNhNubj)}(h**work**h]hwork}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjfubh" to finish if currently executing.}(hjfhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(hMust be called from a sleepable context if **work** was last queued on a non-BH workqueue. Can also be called from non-hardirq atomic contexts including BH if **work** was last queued on a BH workqueue.h](h+Must be called from a sleepable context if }(hjhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhl was last queued on a non-BH workqueue. Can also be called from non-hardirq atomic contexts including BH if }(hjhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh# was last queued on a BH workqueue.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(h>Returns ``true`` if **work** was pending, ``false`` otherwise.h](hReturns }(hjhhhNhNubj)}(h``true``h]htrue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if }(hjhhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh was pending, }(hjhhhNhNubj)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh otherwise.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jenable_work (C function) c.enable_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(h+bool enable_work (struct work_struct *work)h]j)}(h*bool enable_work(struct work_struct *work)h](j)}(hj*h]hbool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hj-hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhj,hMubj)}(h enable_workh]j)}(h enable_workh]h enable_work}(hj?hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj;ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhj,hMubj|)}(h(struct work_struct *work)h]j)}(hstruct 
work_struct *workh](j)}(hjh]hstruct}(hj[hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjWubj)}(h h]h }(hjhhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjWubh)}(hhh]j)}(h work_structh]h work_struct}(hjyhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjvubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj{modnameN classnameNj7j:)}j=]j@)}j3jAsb c.enable_workasbuh1hhjWubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjWubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjWubj)}(hworkh]hwork}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjWubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjSubah}(h]h ]h"]h$]h&]jjuh1j{hjhhhj,hMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhj,hMubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj,hMhjhhubjC)}(hhh]h)}(hEnable a work itemh]hEnable a work item}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jBhjhhhj,hMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(hX8**Parameters** ``struct work_struct *work`` work item to enable **Description** Undo disable_work[_sync]() by decrementing **work**'s disable count. **work** can only be queued if its disable count is 0. Can be called from any context. Returns ``true`` if the disable count reached 0. 
Otherwise, ``false``.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubjg)}(hhh]jl)}(h1``struct work_struct *work`` work item to enable h](jr)}(h``struct work_struct *work``h]j)}(hjh]hstruct work_struct *work}(hj!hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubj)}(hhh]h)}(hwork item to enableh]hwork item to enable}(hj8hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj4hMhj5ubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhj4hMhjubah}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hjZh]h Description}(hj\hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjXubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(h{Undo disable_work[_sync]() by decrementing **work**'s disable count. **work** can only be queued if its disable count is 0.h](h+Undo disable_work[_sync]() by decrementing }(hjphhhNhNubj)}(h**work**h]hwork}(hjxhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjpubh’s disable count. }(hjphhhNhNubj)}(h**work**h]hwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjpubh. can only be queued if its disable count is 0.}(hjphhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(hfCan be called from any context. Returns ``true`` if the disable count reached 0. Otherwise, ``false``.h](h(Can be called from any context. Returns }(hjhhhNhNubj)}(h``true``h]htrue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh, if the disable count reached 0. 
Otherwise, }(hjhhhNhNubj)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j!disable_delayed_work (C function)c.disable_delayed_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(h6bool disable_delayed_work (struct delayed_work *dwork)h]j)}(h5bool disable_delayed_work(struct delayed_work *dwork)h](j)}(hj*h]hbool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM ubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhM ubj)}(hdisable_delayed_workh]j)}(hdisable_delayed_workh]hdisable_delayed_work}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhM ubj|)}(h(struct delayed_work *dwork)h]j)}(hstruct delayed_work *dworkh](j)}(hjh]hstruct}(hj2hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj.ubj)}(h h]h }(hj?hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj.ubh)}(hhh]j)}(h delayed_workh]h delayed_work}(hjPhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjMubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjRmodnameN classnameNj7j:)}j=]j@)}j3jsbc.disable_delayed_workasbuh1hhj.ubj)}(h h]h }(hjphhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj.ubjU)}(hjuh]h*}(hj~hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj.ubj)}(hdworkh]hdwork}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj.ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj*ubah}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhM ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjhM ubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhM hjhhubjC)}(hhh]h)}(h&Disable and cancel a delayed work itemh]h&Disable and cancel a delayed work item}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjhhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjhM ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(h**Parameters** ``struct delayed_work *dwork`` delayed work item to disable **Description** 
disable_work() for delayed work items.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubjg)}(hhh]jl)}(h<``struct delayed_work *dwork`` delayed work item to disable h](jr)}(h``struct delayed_work *dwork``h]j)}(hjh]hstruct delayed_work *dwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubj)}(hhh]h)}(hdelayed work item to disableh]hdelayed work item to disable}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj hM hj ubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhj hM hjubah}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hj1h]h Description}(hj3hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj/ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubh)}(h&disable_work() for delayed work items.h]h&disable_work() for delayed work items.}(hjGhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j&disable_delayed_work_sync (C function)c.disable_delayed_work_synchNtauh1jhjhhhNhNubj)}(hhh](j)}(h;bool disable_delayed_work_sync (struct delayed_work *dwork)h]j)}(h:bool disable_delayed_work_sync(struct delayed_work *dwork)h](j)}(hj*h]hbool}(hjvhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjrhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjrhhhjhMubj)}(hdisable_delayed_work_synch]j)}(hdisable_delayed_work_synch]hdisable_delayed_work_sync}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjrhhhjhMubj|)}(h(struct delayed_work *dwork)h]j)}(hstruct delayed_work *dworkh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h 
}(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(h delayed_workh]h delayed_work}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3jsbc.disable_delayed_work_syncasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hdworkh]hdwork}(hj hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j{hjrhhhjhMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjnhhhjhMubah}(h]jiah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhMhjkhhubjC)}(hhh]h)}(h-Disable, cancel and drain a delayed work itemh]h-Disable, cancel and drain a delayed work item}(hj5hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj2hhubah}(h]h ]h"]h$]h&]uh1jBhjkhhhjhMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejMjfjMjgjhjiuh1jhhhjhNhNubjk)}(h**Parameters** ``struct delayed_work *dwork`` delayed work item to disable **Description** disable_work_sync() for delayed work items.h](h)}(h**Parameters**h]j)}(hjWh]h Parameters}(hjYhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjUubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjQubjg)}(hhh]jl)}(h<``struct delayed_work *dwork`` delayed work item to disable h](jr)}(h``struct delayed_work *dwork``h]j)}(hjvh]hstruct delayed_work *dwork}(hjxhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjtubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjpubj)}(hhh]h)}(hdelayed work item to disableh]hdelayed work item to disable}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jhjpubeh}(h]h ]h"]h$]h&]uh1jkhjhMhjmubah}(h]h ]h"]h$]h&]uh1jfhjQubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjQubh)}(h+disable_work_sync() for 
delayed work items.h]h+disable_work_sync() for delayed work items.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjQubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j enable_delayed_work (C function)c.enable_delayed_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(h5bool enable_delayed_work (struct delayed_work *dwork)h]j)}(h4bool enable_delayed_work(struct delayed_work *dwork)h](j)}(hj*h]hbool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM#ubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhM#ubj)}(henable_delayed_workh]j)}(henable_delayed_workh]henable_delayed_work}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhM#ubj|)}(h(struct delayed_work *dwork)h]j)}(hstruct delayed_work *dworkh](j)}(hjh]hstruct}(hj2hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj.ubj)}(h h]h }(hj?hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj.ubh)}(hhh]j)}(h delayed_workh]h delayed_work}(hjPhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjMubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjRmodnameN classnameNj7j:)}j=]j@)}j3jsbc.enable_delayed_workasbuh1hhj.ubj)}(h h]h }(hjphhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj.ubjU)}(hjuh]h*}(hj~hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj.ubj)}(hdworkh]hdwork}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj.ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj*ubah}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhM#ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjhM#ubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhM#hjhhubjC)}(hhh]h)}(hEnable a delayed work itemh]hEnable a delayed work item}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM#hjhhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjhM#ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(h**Parameters** ``struct delayed_work *dwork`` delayed work item to enable **Description** enable_work() for delayed work 
items.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM'hjubjg)}(hhh]jl)}(h;``struct delayed_work *dwork`` delayed work item to enable h](jr)}(h``struct delayed_work *dwork``h]j)}(hjh]hstruct delayed_work *dwork}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM$hjubj)}(hhh]h)}(hdelayed work item to enableh]hdelayed work item to enable}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj hM$hj ubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhj hM$hjubah}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hj1h]h Description}(hj3hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj/ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM&hjubh)}(h%enable_work() for delayed work items.h]h%enable_work() for delayed work items.}(hjGhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM%hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j!schedule_on_each_cpu (C function)c.schedule_on_each_cpuhNtauh1jhjhhhNhNubj)}(hhh](j)}(h+int schedule_on_each_cpu (work_func_t func)h]j)}(h*int schedule_on_each_cpu(work_func_t func)h](j)}(hinth]hint}(hjvhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjrhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM/ubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjrhhhjhM/ubj)}(hschedule_on_each_cpuh]j)}(hschedule_on_each_cpuh]hschedule_on_each_cpu}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjrhhhjhM/ubj|)}(h(work_func_t func)h]j)}(hwork_func_t funch](h)}(hhh]j)}(h work_func_th]h work_func_t}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN 
classnameNj7j:)}j=]j@)}j3jsbc.schedule_on_each_cpuasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubj)}(hfunch]hfunc}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j{hjrhhhjhM/ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjnhhhjhM/ubah}(h]jiah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhM/hjkhhubjC)}(hhh]h)}(h3execute a function synchronously on each online CPUh]h3execute a function synchronously on each online CPU}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM/hj hhubah}(h]h ]h"]h$]h&]uh1jBhjkhhhjhM/ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej&jfj&jgjhjiuh1jhhhjhNhNubjk)}(hX!**Parameters** ``work_func_t func`` the function to call **Description** schedule_on_each_cpu() executes **func** on each online CPU using the system workqueue and blocks until all CPUs have completed. schedule_on_each_cpu() is very slow. **Return** 0 on success, -errno on failure.h](h)}(h**Parameters**h]j)}(hj0h]h Parameters}(hj2hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj.ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM3hj*ubjg)}(hhh]jl)}(h*``work_func_t func`` the function to call h](jr)}(h``work_func_t func``h]j)}(hjOh]hwork_func_t func}(hjQhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjMubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM0hjIubj)}(hhh]h)}(hthe function to callh]hthe function to call}(hjhhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjdhM0hjeubah}(h]h ]h"]h$]h&]uh1jhjIubeh}(h]h ]h"]h$]h&]uh1jkhjdhM0hjFubah}(h]h ]h"]h$]h&]uh1jfhj*ubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM2hj*ubh)}(hschedule_on_each_cpu() executes **func** on each online CPU using the system workqueue and blocks until all CPUs have completed. 
schedule_on_each_cpu() is very slow.h](h schedule_on_each_cpu() executes }(hjhhhNhNubj)}(h**func**h]hfunc}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh} on each online CPU using the system workqueue and blocks until all CPUs have completed. schedule_on_each_cpu() is very slow.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM1hj*ubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM5hj*ubh)}(h 0 on success, -errno on failure.h]h 0 on success, -errno on failure.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM6hj*ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j'execute_in_process_context (C function)c.execute_in_process_contexthNtauh1jhjhhhNhNubj)}(hhh](j)}(hHint execute_in_process_context (work_func_t fn, struct execute_work *ew)h]j)}(hGint execute_in_process_context(work_func_t fn, struct execute_work *ew)h](j)}(hinth]hint}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMTubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhMTubj)}(hexecute_in_process_contexth]j)}(hexecute_in_process_contexth]hexecute_in_process_context}(hj)hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj%ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhMTubj|)}(h)(work_func_t fn, struct execute_work *ew)h](j)}(hwork_func_t fnh](h)}(hhh]j)}(h work_func_th]h work_func_t}(hjHhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjEubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjJmodnameN classnameNj7j:)}j=]j@)}j3j+sbc.execute_in_process_contextasbuh1hhjAubj)}(h h]h }(hjhhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjAubj)}(hfnh]hfn}(hjvhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjAubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj=ubj)}(hstruct execute_work 
*ewh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(h execute_workh]h execute_work}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]jdc.execute_in_process_contextasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hewh]hew}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj=ubeh}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhMTubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjhMTubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhMThjhhubjC)}(hhh]h)}(h.reliably execute the routine with user contexth]h.reliably execute the routine with user context}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMThj hhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjhMTubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej(jfj(jgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``work_func_t fn`` the function to execute ``struct execute_work *ew`` guaranteed storage for the execute work structure (must be available when the work executes) **Description** Executes the function immediately if process context is available, otherwise schedules the function for delayed execution. 
**Return** 0 - function was executed 1 - function was scheduled for executionh](h)}(h**Parameters**h]j)}(hj2h]h Parameters}(hj4hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj0ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMXhj,ubjg)}(hhh](jl)}(h+``work_func_t fn`` the function to execute h](jr)}(h``work_func_t fn``h]j)}(hjQh]hwork_func_t fn}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1jhjOubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMUhjKubj)}(hhh]h)}(hthe function to executeh]hthe function to execute}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjfhMUhjgubah}(h]h ]h"]h$]h&]uh1jhjKubeh}(h]h ]h"]h$]h&]uh1jkhjfhMUhjHubjl)}(hy``struct execute_work *ew`` guaranteed storage for the execute work structure (must be available when the work executes) h](jr)}(h``struct execute_work *ew``h]j)}(hjh]hstruct execute_work *ew}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMWhjubj)}(hhh]h)}(h\guaranteed storage for the execute work structure (must be available when the work executes)h]h\guaranteed storage for the execute work structure (must be available when the work executes)}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMVhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhMWhjHubeh}(h]h ]h"]h$]h&]uh1jfhj,ubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMYhj,ubh)}(hzExecutes the function immediately if process context is available, otherwise schedules the function for delayed execution.h]hzExecutes the function immediately if process context is available, otherwise schedules the function for delayed execution.}(hjhhhNhNubah}(h]h 
]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMXhj,ubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM[hj,ubh)}(hB0 - function was executed 1 - function was scheduled for executionh]hB0 - function was executed 1 - function was scheduled for execution}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM\hj,ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j!free_workqueue_attrs (C function)c.free_workqueue_attrshNtauh1jhjhhhNhNubj)}(hhh](j)}(h9void free_workqueue_attrs (struct workqueue_attrs *attrs)h]j)}(h8void free_workqueue_attrs(struct workqueue_attrs *attrs)h](j)}(hvoidh]hvoid}(hj2hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj.hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMnubj)}(h h]h }(hjAhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj.hhhj@hMnubj)}(hfree_workqueue_attrsh]j)}(hfree_workqueue_attrsh]hfree_workqueue_attrs}(hjShhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjOubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj.hhhj@hMnubj|)}(h(struct workqueue_attrs *attrs)h]j)}(hstruct workqueue_attrs *attrsh](j)}(hjh]hstruct}(hjohhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjkubj)}(h h]h }(hj|hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjkubh)}(hhh]j)}(hworkqueue_attrsh]hworkqueue_attrs}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3jUsbc.free_workqueue_attrsasbuh1hhjkubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjkubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjkubj)}(hattrsh]hattrs}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjkubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjgubah}(h]h ]h"]h$]h&]jjuh1j{hj.hhhj@hMnubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj*hhhj@hMnubah}(h]j%ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj@hMnhj'hhubjC)}(hhh]h)}(hfree a 
workqueue_attrsh]hfree a workqueue_attrs}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMnhjhhubah}(h]h ]h"]h$]h&]uh1jBhj'hhhj@hMnubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej jfj jgjhjiuh1jhhhjhNhNubjk)}(h{**Parameters** ``struct workqueue_attrs *attrs`` workqueue_attrs to free **Description** Undo alloc_workqueue_attrs().h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMrhjubjg)}(hhh]jl)}(h:``struct workqueue_attrs *attrs`` workqueue_attrs to free h](jr)}(h!``struct workqueue_attrs *attrs``h]j)}(hj3h]hstruct workqueue_attrs *attrs}(hj5hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj1ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMohj-ubj)}(hhh]h)}(hworkqueue_attrs to freeh]hworkqueue_attrs to free}(hjLhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjHhMohjIubah}(h]h ]h"]h$]h&]uh1jhj-ubeh}(h]h ]h"]h$]h&]uh1jkhjHhMohj*ubah}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hjnh]h Description}(hjphhhNhNubah}(h]h ]h"]h$]h&]uh1jhjlubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMqhjubh)}(hUndo alloc_workqueue_attrs().h]hUndo alloc_workqueue_attrs().}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMphjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j"alloc_workqueue_attrs (C function)c.alloc_workqueue_attrshNtauh1jhjhhhNhNubj)}(hhh](j)}(h5struct workqueue_attrs * alloc_workqueue_attrs (void)h]j)}(h3struct workqueue_attrs *alloc_workqueue_attrs(void)h](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM}ubj)}(h h]h }(hjhhhNhNubah}(h]h ]j 
ah"]h$]h&]uh1jhjhhhjhM}ubh)}(hhh]j)}(hworkqueue_attrsh]hworkqueue_attrs}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3alloc_workqueue_attrssbc.alloc_workqueue_attrsasbuh1hhjhhhjhM}ubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhM}ubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjhhhjhM}ubj)}(halloc_workqueue_attrsh]j)}(hjh]halloc_workqueue_attrs}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhM}ubj|)}(h(void)h]j)}(hvoidh]j)}(hvoidh]hvoid}(hj-hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj)ubah}(h]h ]h"]h$]h&]noemphjjuh1jhj%ubah}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhM}ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjhM}ubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhM}hjhhubjC)}(hhh]h)}(hallocate a workqueue_attrsh]hallocate a workqueue_attrs}(hjWhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM}hjThhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjhM}ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejojfjojgjhjiuh1jhhhjhNhNubjk)}(h**Parameters** ``void`` no arguments **Description** Allocate a new workqueue_attrs, initialize with default settings and return it. **Return** The allocated new workqueue_attr on success. 
``NULL`` on failure.h](h)}(h**Parameters**h]j)}(hjyh]h Parameters}(hj{hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjwubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjsubjg)}(hhh]jl)}(h``void`` no arguments h](jr)}(h``void``h]j)}(hjh]hvoid}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chKhjubj)}(hhh]h)}(h no argumentsh]h no arguments}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhKhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhKhjubah}(h]h ]h"]h$]h&]uh1jfhjsubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chKhjsubh)}(hOAllocate a new workqueue_attrs, initialize with default settings and return it.h]hOAllocate a new workqueue_attrs, initialize with default settings and return it.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM~hjsubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjsubh)}(hAThe allocated new workqueue_attr on success. ``NULL`` on failure.h](h-The allocated new workqueue_attr on success. 
}(hjhhhNhNubj)}(h``NULL``h]hNULL}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh on failure.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjsubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jinit_worker_pool (C function)c.init_worker_poolhNtauh1jhjhhhNhNubj)}(hhh](j)}(h/int init_worker_pool (struct worker_pool *pool)h]j)}(h.int init_worker_pool(struct worker_pool *pool)h](j)}(hinth]hint}(hjQhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjMhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hj`hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjMhhhj_hMubj)}(hinit_worker_poolh]j)}(hinit_worker_poolh]hinit_worker_pool}(hjrhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjnubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjMhhhj_hMubj|)}(h(struct worker_pool *pool)h]j)}(hstruct worker_pool *poolh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(h worker_poolh]h worker_pool}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3jtsbc.init_worker_poolasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hpoolh]hpool}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j{hjMhhhj_hMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjIhhhj_hMubah}(h]jDah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj_hMhjFhhubjC)}(hhh]h)}(h'initialize a newly zalloc'd worker_poolh]h)initialize a newly zalloc’d worker_pool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jBhjFhhhj_hMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej)jfj)jgjhjiuh1jhhhjhNhNubjk)}(hX]**Parameters** ``struct worker_pool *pool`` worker_pool to initialize **Description** Initialize a newly zalloc'd **pool**. 
It also allocates **pool->attrs**. **Return** 0 on success, -errno on failure. Even on failure, all fields inside **pool** proper are initialized and put_unbound_pool() can be called on **pool** safely to release it.h](h)}(h**Parameters**h]j)}(hj3h]h Parameters}(hj5hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj1ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj-ubjg)}(hhh]jl)}(h7``struct worker_pool *pool`` worker_pool to initialize h](jr)}(h``struct worker_pool *pool``h]j)}(hjRh]hstruct worker_pool *pool}(hjThhhNhNubah}(h]h ]h"]h$]h&]uh1jhjPubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjLubj)}(hhh]h)}(hworker_pool to initializeh]hworker_pool to initialize }(hjkhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjghMhjhubah}(h]h ]h"]h$]h&]uh1jhjLubeh}(h]h ]h"]h$]h&]uh1jkhjghMhjIubah}(h]h ]h"]h$]h&]uh1jfhj-ubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj-ubh)}(hIInitialize a newly zalloc'd **pool**. It also allocates **pool->attrs**.h](hInitialize a newly zalloc’d }(hjhhhNhNubj)}(h**pool**h]hpool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh. It also allocates }(hjhhhNhNubj)}(h**pool->attrs**h]h pool->attrs}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj-ubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj-ubh)}(h0 on success, -errno on failure. Even on failure, all fields inside **pool** proper are initialized and put_unbound_pool() can be called on **pool** safely to release it.h](hE0 on success, -errno on failure. 
Even on failure, all fields inside }(hjhhhNhNubj)}(h**pool**h]hpool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh@ proper are initialized and put_unbound_pool() can be called on }(hjhhhNhNubj)}(h**pool**h]hpool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh safely to release it.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj-ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jput_unbound_pool (C function)c.put_unbound_poolhNtauh1jhjhhhNhNubj)}(hhh](j)}(h0void put_unbound_pool (struct worker_pool *pool)h]j)}(h/void put_unbound_pool(struct worker_pool *pool)h](j)}(hvoidh]hvoid}(hjAhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj=hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjPhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj=hhhjOhMubj)}(hput_unbound_poolh]j)}(hput_unbound_poolh]hput_unbound_pool}(hjbhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj^ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj=hhhjOhMubj|)}(h(struct worker_pool *pool)h]j)}(hstruct worker_pool *poolh](j)}(hjh]hstruct}(hj~hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjzubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjzubh)}(hhh]j)}(h worker_poolh]h worker_pool}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3jdsbc.put_unbound_poolasbuh1hhjzubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjzubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjzubj)}(hpoolh]hpool}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjzubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjvubah}(h]h ]h"]h$]h&]jjuh1j{hj=hhhjOhMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj9hhhjOhMubah}(h]j4ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjOhMhj6hhubjC)}(hhh]h)}(hput a worker_poolh]hput a worker_pool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jBhj6hhhjOhMubeh}(h]h 
](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(hXz**Parameters** ``struct worker_pool *pool`` worker_pool to put **Description** Put **pool**. If its refcnt reaches zero, it gets destroyed in RCU safe manner. get_unbound_pool() calls this function on its failure path and this function should be able to release pools which went through, successfully or not, init_worker_pool(). Should be called with wq_pool_mutex held.h](h)}(h**Parameters**h]j)}(hj#h]h Parameters}(hj%hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj!ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubjg)}(hhh]jl)}(h0``struct worker_pool *pool`` worker_pool to put h](jr)}(h``struct worker_pool *pool``h]j)}(hjBh]hstruct worker_pool *pool}(hjDhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj@ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj<ubj)}(hhh]h)}(hworker_pool to puth]hworker_pool to put}(hj[hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjWhMhjXubah}(h]h ]h"]h$]h&]uh1jhj<ubeh}(h]h ]h"]h$]h&]uh1jkhjWhMhj9ubah}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hj}h]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj{ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(hPut **pool**. If its refcnt reaches zero, it gets destroyed in RCU safe manner. get_unbound_pool() calls this function on its failure path and this function should be able to release pools which went through, successfully or not, init_worker_pool().h](hPut }(hjhhhNhNubj)}(h**pool**h]hpool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh. If its refcnt reaches zero, it gets destroyed in RCU safe manner. 
get_unbound_pool() calls this function on its failure path and this function should be able to release pools which went through, successfully or not, init_worker_pool().}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(h)Should be called with wq_pool_mutex held.h]h)Should be called with wq_pool_mutex held.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jget_unbound_pool (C function)c.get_unbound_poolhNtauh1jhjhhhNhNubj)}(hhh](j)}(hKstruct worker_pool * get_unbound_pool (const struct workqueue_attrs *attrs)h]j)}(hIstruct worker_pool *get_unbound_pool(const struct workqueue_attrs *attrs)h](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhMubh)}(hhh]j)}(h worker_poolh]h worker_pool}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3get_unbound_poolsbc.get_unbound_poolasbuh1hhjhhhjhMubj)}(h h]h }(hj#hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhMubjU)}(hjuh]h*}(hj1hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjhhhjhMubj)}(hget_unbound_poolh]j)}(hj h]hget_unbound_pool}(hjBhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj>ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhMubj|)}(h%(const struct workqueue_attrs *attrs)h]j)}(h#const struct workqueue_attrs *attrsh](j)}(hjh]hconst}(hj]hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjYubj)}(h h]h }(hjjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjYubj)}(hjh]hstruct}(hjxhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjYubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjYubh)}(hhh]j)}(hworkqueue_attrsh]hworkqueue_attrs}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN 
classnameNj7j:)}j=]jc.get_unbound_poolasbuh1hhjYubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjYubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjYubj)}(hattrsh]hattrs}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjYubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjUubah}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjhMubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhMhjhhubjC)}(hhh]h)}(h/get a worker_pool with the specified attributesh]h/get a worker_pool with the specified attributes}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjhMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``const struct workqueue_attrs *attrs`` the attributes of the worker_pool to get **Description** Obtain a worker_pool which has the same attributes as **attrs**, bump the reference count and return it. If there already is a matching worker_pool, it will be used; otherwise, this function attempts to create a new one. Should be called with wq_pool_mutex held. **Return** On success, a worker_pool with the same attributes as **attrs**. 
On failure, ``NULL``.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubjg)}(hhh]jl)}(hQ``const struct workqueue_attrs *attrs`` the attributes of the worker_pool to get h](jr)}(h'``const struct workqueue_attrs *attrs``h]j)}(hj:h]h#const struct workqueue_attrs *attrs}(hj<hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj8ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj4ubj)}(hhh]h)}(h(the attributes of the worker_pool to geth]h(the attributes of the worker_pool to get}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1hhjOhMhjPubah}(h]h ]h"]h$]h&]uh1jhj4ubeh}(h]h ]h"]h$]h&]uh1jkhjOhMhj1ubah}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hjuh]h Description}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjsubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(hObtain a worker_pool which has the same attributes as **attrs**, bump the reference count and return it. If there already is a matching worker_pool, it will be used; otherwise, this function attempts to create a new one.h](h6Obtain a worker_pool which has the same attributes as }(hjhhhNhNubj)}(h **attrs**h]hattrs}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh, bump the reference count and return it. 
If there already is a matching worker_pool, it will be used; otherwise, this function attempts to create a new one.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(h)Should be called with wq_pool_mutex held.h]h)Should be called with wq_pool_mutex held.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(hVOn success, a worker_pool with the same attributes as **attrs**. On failure, ``NULL``.h](h6On success, a worker_pool with the same attributes as }(hjhhhNhNubj)}(h **attrs**h]hattrs}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh. On failure, }(hjhhhNhNubj)}(h``NULL``h]hNULL}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j wq_calc_pod_cpumask (C function)c.wq_calc_pod_cpumaskhNtauh1jhjhhhNhNubj)}(hhh](j)}(hAvoid wq_calc_pod_cpumask (struct workqueue_attrs *attrs, int cpu)h]j)}(h@void wq_calc_pod_cpumask(struct workqueue_attrs *attrs, int cpu)h](j)}(hvoidh]hvoid}(hj&hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj"hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hj5hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj"hhhj4hMubj)}(hwq_calc_pod_cpumaskh]j)}(hwq_calc_pod_cpumaskh]hwq_calc_pod_cpumask}(hjGhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjCubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj"hhhj4hMubj|)}(h((struct workqueue_attrs *attrs, int cpu)h](j)}(hstruct workqueue_attrs *attrsh](j)}(hjh]hstruct}(hjchhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj_ubj)}(h h]h }(hjphhhNhNubah}(h]h ]j 
ah"]h$]h&]uh1jhj_ubh)}(hhh]j)}(hworkqueue_attrsh]hworkqueue_attrs}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj~ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3jIsbc.wq_calc_pod_cpumaskasbuh1hhj_ubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj_ubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj_ubj)}(hattrsh]hattrs}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj_ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj[ubj)}(hint cpuh](j)}(hinth]hint}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubj)}(hcpuh]hcpu}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj[ubeh}(h]h ]h"]h$]h&]jjuh1j{hj"hhhj4hMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhj4hMubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj4hMhjhhubjC)}(hhh]h)}(h'calculate a wq_attrs' cpumask for a podh]h)calculate a wq_attrs’ cpumask for a pod}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jBhjhhhj4hMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej3jfj3jgjhjiuh1jhhhjhNhNubjk)}(hXK**Parameters** ``struct workqueue_attrs *attrs`` the wq_attrs of the default pwq of the target workqueue ``int cpu`` the target CPU **Description** Calculate the cpumask a workqueue with **attrs** should use on **pod**. The result is stored in **attrs->__pod_cpumask**. If pod affinity is not enabled, **attrs->cpumask** is always used. If enabled and **pod** has online CPUs requested by **attrs**, the returned cpumask is the intersection of the possible CPUs of **pod** and **attrs->cpumask**. 
The caller is responsible for ensuring that the cpumask of **pod** stays stable.h](h)}(h**Parameters**h]j)}(hj=h]h Parameters}(hj?hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj;ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj7ubjg)}(hhh](jl)}(hZ``struct workqueue_attrs *attrs`` the wq_attrs of the default pwq of the target workqueue h](jr)}(h!``struct workqueue_attrs *attrs``h]j)}(hj\h]hstruct workqueue_attrs *attrs}(hj^hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjZubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjVubj)}(hhh]h)}(h7the wq_attrs of the default pwq of the target workqueueh]h7the wq_attrs of the default pwq of the target workqueue}(hjuhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjqhMhjrubah}(h]h ]h"]h$]h&]uh1jhjVubeh}(h]h ]h"]h$]h&]uh1jkhjqhMhjSubjl)}(h``int cpu`` the target CPU h](jr)}(h ``int cpu``h]j)}(hjh]hint cpu}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubj)}(hhh]h)}(hthe target CPUh]hthe target CPU}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhMhjSubeh}(h]h ]h"]h$]h&]uh1jfhj7ubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj7ubh)}(hyCalculate the cpumask a workqueue with **attrs** should use on **pod**. The result is stored in **attrs->__pod_cpumask**.h](h'Calculate the cpumask a workqueue with }(hjhhhNhNubj)}(h **attrs**h]hattrs}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh should use on }(hjhhhNhNubj)}(h**pod**h]hpod}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh. 
The result is stored in }(hjhhhNhNubj)}(h**attrs->__pod_cpumask**h]hattrs->__pod_cpumask}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj7ubh)}(hIf pod affinity is not enabled, **attrs->cpumask** is always used. If enabled and **pod** has online CPUs requested by **attrs**, the returned cpumask is the intersection of the possible CPUs of **pod** and **attrs->cpumask**.h](h If pod affinity is not enabled, }(hj+hhhNhNubj)}(h**attrs->cpumask**h]hattrs->cpumask}(hj3hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj+ubh is always used. If enabled and }(hj+hhhNhNubj)}(h**pod**h]hpod}(hjEhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj+ubh has online CPUs requested by }(hj+hhhNhNubj)}(h **attrs**h]hattrs}(hjWhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj+ubhC, the returned cpumask is the intersection of the possible CPUs of }(hj+hhhNhNubj)}(h**pod**h]hpod}(hjihhhNhNubah}(h]h ]h"]h$]h&]uh1jhj+ubh and }(hj+hhhNhNubj)}(h**attrs->cpumask**h]hattrs->cpumask}(hj{hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj+ubh.}(hj+hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj7ubh)}(hPThe caller is responsible for ensuring that the cpumask of **pod** stays stable.h](h;The caller is responsible for ensuring that the cpumask of }(hjhhhNhNubj)}(h**pod**h]hpod}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh stays stable.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj7ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j"apply_workqueue_attrs (C function)c.apply_workqueue_attrshNtauh1jhjhhhNhNubj)}(hhh](j)}(h\int apply_workqueue_attrs (struct workqueue_struct *wq, const struct workqueue_attrs *attrs)h]j)}(h[int apply_workqueue_attrs(struct workqueue_struct *wq, const struct workqueue_attrs *attrs)h](j)}(hinth]hint}(hjhhhNhNubah}(h]h 
]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMyubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhMyubj)}(happly_workqueue_attrsh]j)}(happly_workqueue_attrsh]happly_workqueue_attrs}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhMyubj|)}(hB(struct workqueue_struct *wq, const struct workqueue_attrs *attrs)h](j)}(hstruct workqueue_struct *wqh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hj0hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj-ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj2modnameN classnameNj7j:)}j=]j@)}j3jsbc.apply_workqueue_attrsasbuh1hhjubj)}(h h]h }(hjPhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hj^hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hwqh]hwq}(hjkhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj ubj)}(h#const struct workqueue_attrs *attrsh](j)}(hjh]hconst}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubj)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkqueue_attrsh]hworkqueue_attrs}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]jLc.apply_workqueue_attrsasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hattrsh]hattrs}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj ubeh}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhMyubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjhMyubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhMyhjhhubjC)}(hhh]h)}(h1apply new workqueue_attrs to an unbound workqueueh]h1apply new workqueue_attrs to an unbound workqueue}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMyhjhhubah}(h]h 
]h"]h$]h&]uh1jBhjhhhjhMyubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej8jfj8jgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``struct workqueue_struct *wq`` the target workqueue ``const struct workqueue_attrs *attrs`` the workqueue_attrs to apply, allocated with alloc_workqueue_attrs() **Description** Apply **attrs** to an unbound workqueue **wq**. Unless disabled, this function maps a separate pwq to each CPU pod with possibles CPUs in **attrs->cpumask** so that work items are affine to the pod it was issued on. Older pwqs are released as in-flight work items finish. Note that a work item which repeatedly requeues itself back-to-back will stay on its current pwq. Performs GFP_KERNEL allocations. **Return** 0 on success and -errno on failure.h](h)}(h**Parameters**h]j)}(hjBh]h Parameters}(hjDhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj@ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM}hj<ubjg)}(hhh](jl)}(h5``struct workqueue_struct *wq`` the target workqueue h](jr)}(h``struct workqueue_struct *wq``h]j)}(hjah]hstruct workqueue_struct *wq}(hjchhhNhNubah}(h]h ]h"]h$]h&]uh1jhj_ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMzhj[ubj)}(hhh]h)}(hthe target workqueueh]hthe target workqueue}(hjzhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjvhMzhjwubah}(h]h ]h"]h$]h&]uh1jhj[ubeh}(h]h ]h"]h$]h&]uh1jkhjvhMzhjXubjl)}(hm``const struct workqueue_attrs *attrs`` the workqueue_attrs to apply, allocated with alloc_workqueue_attrs() h](jr)}(h'``const struct workqueue_attrs *attrs``h]j)}(hjh]h#const struct workqueue_attrs *attrs}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM{hjubj)}(hhh]h)}(hDthe workqueue_attrs to apply, allocated with alloc_workqueue_attrs()h]hDthe workqueue_attrs to apply, allocated with alloc_workqueue_attrs()}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM{hjubah}(h]h 
]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhM{hjXubeh}(h]h ]h"]h$]h&]uh1jfhj<ubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM}hj<ubh)}(hXqApply **attrs** to an unbound workqueue **wq**. Unless disabled, this function maps a separate pwq to each CPU pod with possibles CPUs in **attrs->cpumask** so that work items are affine to the pod it was issued on. Older pwqs are released as in-flight work items finish. Note that a work item which repeatedly requeues itself back-to-back will stay on its current pwq.h](hApply }(hjhhhNhNubj)}(h **attrs**h]hattrs}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh to an unbound workqueue }(hjhhhNhNubj)}(h**wq**h]hwq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh\. Unless disabled, this function maps a separate pwq to each CPU pod with possibles CPUs in }(hjhhhNhNubj)}(h**attrs->cpumask**h]hattrs->cpumask}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh so that work items are affine to the pod it was issued on. Older pwqs are released as in-flight work items finish. 
Note that a work item which repeatedly requeues itself back-to-back will stay on its current pwq.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM|hj<ubh)}(h Performs GFP_KERNEL allocations.h]h Performs GFP_KERNEL allocations.}(hj0hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj<ubh)}(h **Return**h]j)}(hjAh]hReturn}(hjChhhNhNubah}(h]h ]h"]h$]h&]uh1jhj?ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj<ubh)}(h#0 on success and -errno on failure.h]h#0 on success and -errno on failure.}(hjWhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj<ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j"unbound_wq_update_pwq (C function)c.unbound_wq_update_pwqhNtauh1jhjhhhNhNubj)}(hhh](j)}(hAvoid unbound_wq_update_pwq (struct workqueue_struct *wq, int cpu)h]j)}(h@void unbound_wq_update_pwq(struct workqueue_struct *wq, int cpu)h](j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhMubj)}(hunbound_wq_update_pwqh]j)}(hunbound_wq_update_pwqh]hunbound_wq_update_pwq}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhMubj|)}(h&(struct workqueue_struct *wq, int cpu)h](j)}(hstruct workqueue_struct *wqh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3jsbc.unbound_wq_update_pwqasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h 
]j`ah"]h$]h&]uh1jThjubj)}(hwqh]hwq}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hint cpuh](j)}(hinth]hint}(hj5hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj1ubj)}(h h]h }(hjChhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj1ubj)}(hcpuh]hcpu}(hjQhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj1ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj~hhhjhMubah}(h]jyah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhMhj{hhubjC)}(hhh]h)}(h%update a pwq slot for CPU hot[un]plugh]h%update a pwq slot for CPU hot[un]plug}(hj{hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjxhhubah}(h]h ]h"]h$]h&]uh1jBhj{hhhjhMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(hXj**Parameters** ``struct workqueue_struct *wq`` the target workqueue ``int cpu`` the CPU to update the pwq slot for **Description** This function is to be called from ``CPU_DOWN_PREPARE``, ``CPU_ONLINE`` and ``CPU_DOWN_FAILED``. **cpu** is in the same pod of the CPU being hot[un]plugged. If pod affinity can't be adjusted due to memory allocation failure, it falls back to **wq->dfl_pwq** which may not be optimal but is always correct. Note that when the last allowed CPU of a pod goes offline for a workqueue with a cpumask spanning multiple pods, the workers which were already executing the work items for the workqueue will lose their CPU affinity and may execute on any CPU. This is similar to how per-cpu workqueues behave on CPU_DOWN. 
If a workqueue user wants strict affinity, it's the user's responsibility to flush the work item from CPU_DOWN_PREPARE.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubjg)}(hhh](jl)}(h5``struct workqueue_struct *wq`` the target workqueue h](jr)}(h``struct workqueue_struct *wq``h]j)}(hjh]hstruct workqueue_struct *wq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubj)}(hhh]h)}(hthe target workqueueh]hthe target workqueue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhMhjubjl)}(h/``int cpu`` the CPU to update the pwq slot for h](jr)}(h ``int cpu``h]j)}(hjh]hint cpu}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubj)}(hhh]h)}(h"the CPU to update the pwq slot forh]h"the CPU to update the pwq slot for}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj hMhj ubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhj hMhjubeh}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hj0h]h Description}(hj2hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj.ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(hThis function is to be called from ``CPU_DOWN_PREPARE``, ``CPU_ONLINE`` and ``CPU_DOWN_FAILED``. **cpu** is in the same pod of the CPU being hot[un]plugged.h](h#This function is to be called from }(hjFhhhNhNubj)}(h``CPU_DOWN_PREPARE``h]hCPU_DOWN_PREPARE}(hjNhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjFubh, }(hjFhhhNhNubj)}(h``CPU_ONLINE``h]h CPU_ONLINE}(hj`hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjFubh and }(hjFhhhNhNubj)}(h``CPU_DOWN_FAILED``h]hCPU_DOWN_FAILED}(hjrhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjFubh. 
}(hjFhhhNhNubj)}(h**cpu**h]hcpu}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjFubh4 is in the same pod of the CPU being hot[un]plugged.}(hjFhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(hIf pod affinity can't be adjusted due to memory allocation failure, it falls back to **wq->dfl_pwq** which may not be optimal but is always correct.h](hWIf pod affinity can’t be adjusted due to memory allocation failure, it falls back to }(hjhhhNhNubj)}(h**wq->dfl_pwq**h]h wq->dfl_pwq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh0 which may not be optimal but is always correct.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(hXNote that when the last allowed CPU of a pod goes offline for a workqueue with a cpumask spanning multiple pods, the workers which were already executing the work items for the workqueue will lose their CPU affinity and may execute on any CPU. This is similar to how per-cpu workqueues behave on CPU_DOWN. If a workqueue user wants strict affinity, it's the user's responsibility to flush the work item from CPU_DOWN_PREPARE.h]hXNote that when the last allowed CPU of a pod goes offline for a workqueue with a cpumask spanning multiple pods, the workers which were already executing the work items for the workqueue will lose their CPU affinity and may execute on any CPU. This is similar to how per-cpu workqueues behave on CPU_DOWN. 
If a workqueue user wants strict affinity, it’s the user’s responsibility to flush the work item from CPU_DOWN_PREPARE.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j!wq_adjust_max_active (C function)c.wq_adjust_max_activehNtauh1jhjhhhNhNubj)}(hhh](j)}(h7void wq_adjust_max_active (struct workqueue_struct *wq)h]j)}(h6void wq_adjust_max_active(struct workqueue_struct *wq)h](j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM_ubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhM_ubj)}(hwq_adjust_max_activeh]j)}(hwq_adjust_max_activeh]hwq_adjust_max_active}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhM_ubj|)}(h(struct workqueue_struct *wq)h]j)}(hstruct workqueue_struct *wqh](j)}(hjh]hstruct}(hj*hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj&ubj)}(h h]h }(hj7hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj&ubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjHhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjEubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjJmodnameN classnameNj7j:)}j=]j@)}j3jsbc.wq_adjust_max_activeasbuh1hhj&ubj)}(h h]h }(hjhhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj&ubjU)}(hjuh]h*}(hjvhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj&ubj)}(hwqh]hwq}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj&ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj"ubah}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhM_ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjhM_ubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhM_hjhhubjC)}(hhh]h)}(h/update a wq's max_active to the current settingh]h1update a wq’s max_active to the current setting}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM_hjhhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjhM_ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``struct 
workqueue_struct *wq`` target workqueue **Description** If **wq** isn't freezing, set **wq->max_active** to the saved_max_active and activate inactive work items accordingly. If **wq** is freezing, clear **wq->max_active** to zero.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMchjubjg)}(hhh]jl)}(h1``struct workqueue_struct *wq`` target workqueue h](jr)}(h``struct workqueue_struct *wq``h]j)}(hjh]hstruct workqueue_struct *wq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM`hjubj)}(hhh]h)}(htarget workqueueh]htarget workqueue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM`hjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhM`hjubah}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hj)h]h Description}(hj+hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj'ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMbhjubh)}(hIf **wq** isn't freezing, set **wq->max_active** to the saved_max_active and activate inactive work items accordingly. If **wq** is freezing, clear **wq->max_active** to zero.h](hIf }(hj?hhhNhNubj)}(h**wq**h]hwq}(hjGhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj?ubh isn’t freezing, set }(hj?hhhNhNubj)}(h**wq->max_active**h]hwq->max_active}(hjYhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj?ubhJ to the saved_max_active and activate inactive work items accordingly. 
If }(hj?hhhNhNubj)}(h**wq**h]hwq}(hjkhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj?ubh is freezing, clear }(hj?hhhNhNubj)}(h**wq->max_active**h]hwq->max_active}(hj}hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj?ubh to zero.}(hj?hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMahjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jdestroy_workqueue (C function)c.destroy_workqueuehNtauh1jhjhhhNhNubj)}(hhh](j)}(h4void destroy_workqueue (struct workqueue_struct *wq)h]j)}(h3void destroy_workqueue(struct workqueue_struct *wq)h](j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMvubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhMvubj)}(hdestroy_workqueueh]j)}(hdestroy_workqueueh]hdestroy_workqueue}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhMvubj|)}(h(struct workqueue_struct *wq)h]j)}(hstruct workqueue_struct *wqh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3jsbc.destroy_workqueueasbuh1hhjubj)}(h h]h }(hj1hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hj?hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hwqh]hwq}(hjLhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhMvubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjhMvubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhMvhjhhubjC)}(hhh]h)}(hsafely terminate a workqueueh]hsafely terminate a workqueue}(hjvhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMvhjshhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjhMvubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``struct 
workqueue_struct *wq`` target workqueue **Description** Safely destroy a workqueue. All work currently pending will be done first. This function does NOT guarantee that non-pending work that has been submitted with queue_delayed_work() and similar functions will be done before destroying the workqueue. The fundamental problem is that, currently, the workqueue has no way of accessing non-pending delayed_work. delayed_work is only linked on the timer-side. All delayed_work must, therefore, be canceled before calling this function. TODO: It would be better if the problem described above wouldn't exist and destroy_workqueue() would cleanly cancel all pending and non-pending delayed_work.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMzhjubjg)}(hhh]jl)}(h1``struct workqueue_struct *wq`` target workqueue h](jr)}(h``struct workqueue_struct *wq``h]j)}(hjh]hstruct workqueue_struct *wq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMwhjubj)}(hhh]h)}(htarget workqueueh]htarget workqueue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMwhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhMwhjubah}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMyhjubh)}(hJSafely destroy a workqueue. All work currently pending will be done first.h]hJSafely destroy a workqueue. 
All work currently pending will be done first.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMxhjubh)}(hXThis function does NOT guarantee that non-pending work that has been submitted with queue_delayed_work() and similar functions will be done before destroying the workqueue. The fundamental problem is that, currently, the workqueue has no way of accessing non-pending delayed_work. delayed_work is only linked on the timer-side. All delayed_work must, therefore, be canceled before calling this function.h]hXThis function does NOT guarantee that non-pending work that has been submitted with queue_delayed_work() and similar functions will be done before destroying the workqueue. The fundamental problem is that, currently, the workqueue has no way of accessing non-pending delayed_work. delayed_work is only linked on the timer-side. All delayed_work must, therefore, be canceled before calling this function.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMzhjubh)}(hTODO: It would be better if the problem described above wouldn't exist and destroy_workqueue() would cleanly cancel all pending and non-pending delayed_work.h]hTODO: It would be better if the problem described above wouldn’t exist and destroy_workqueue() would cleanly cancel all pending and non-pending delayed_work.}(hj&hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j%workqueue_set_max_active (C function)c.workqueue_set_max_activehNtauh1jhjhhhNhNubj)}(hhh](j)}(hKvoid workqueue_set_max_active (struct workqueue_struct *wq, int max_active)h]j)}(hJvoid workqueue_set_max_active(struct workqueue_struct *wq, int max_active)h](j)}(hvoidh]hvoid}(hjUhhhNhNubah}(h]h 
]jah"]h$]h&]uh1jhjQhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjdhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjQhhhjchMubj)}(hworkqueue_set_max_activeh]j)}(hworkqueue_set_max_activeh]hworkqueue_set_max_active}(hjvhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjrubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjQhhhjchMubj|)}(h-(struct workqueue_struct *wq, int max_active)h](j)}(hstruct workqueue_struct *wqh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3jxsbc.workqueue_set_max_activeasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hwqh]hwq}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hint max_activeh](j)}(hinth]hint}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubj)}(h max_activeh]h max_active}(hj hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1j{hjQhhhjchMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjMhhhjchMubah}(h]jHah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjchMhjJhhubjC)}(hhh]h)}(h adjust max_active of a workqueueh]h adjust max_active of a workqueue}(hjJhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjGhhubah}(h]h ]h"]h$]h&]uh1jBhjJhhhjchMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejbjfjbjgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``struct workqueue_struct *wq`` target workqueue ``int max_active`` new max_active value. **Description** Set max_active of **wq** to **max_active**. See the alloc_workqueue() function comment. 
**Context** Don't call from IRQ context.h](h)}(h**Parameters**h]j)}(hjlh]h Parameters}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjfubjg)}(hhh](jl)}(h1``struct workqueue_struct *wq`` target workqueue h](jr)}(h``struct workqueue_struct *wq``h]j)}(hjh]hstruct workqueue_struct *wq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubj)}(hhh]h)}(htarget workqueueh]htarget workqueue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhMhjubjl)}(h)``int max_active`` new max_active value. h](jr)}(h``int max_active``h]j)}(hjh]hint max_active}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubj)}(hhh]h)}(hnew max_active value.h]hnew max_active value.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhMhjubeh}(h]h ]h"]h$]h&]uh1jfhjfubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjfubh)}(hWSet max_active of **wq** to **max_active**. See the alloc_workqueue() function comment.h](hSet max_active of }(hjhhhNhNubj)}(h**wq**h]hwq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh to }(hjhhhNhNubj)}(h**max_active**h]h max_active}(hj/hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh-. 
See the alloc_workqueue() function comment.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjfubh)}(h **Context**h]j)}(hjJh]hContext}(hjLhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjHubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjfubh)}(hDon't call from IRQ context.h]hDon’t call from IRQ context.}(hj`hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjfubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j%workqueue_set_min_active (C function)c.workqueue_set_min_activehNtauh1jhjhhhNhNubj)}(hhh](j)}(hKvoid workqueue_set_min_active (struct workqueue_struct *wq, int min_active)h]j)}(hJvoid workqueue_set_min_active(struct workqueue_struct *wq, int min_active)h](j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhMubj)}(hworkqueue_set_min_activeh]j)}(hworkqueue_set_min_activeh]hworkqueue_set_min_active}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhMubj|)}(h-(struct workqueue_struct *wq, int min_active)h](j)}(hstruct workqueue_struct *wqh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3jsbc.workqueue_set_min_activeasbuh1hhjubj)}(h h]h }(hj hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hwqh]hwq}(hj%hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hint min_activeh](j)}(hinth]hint}(hj>hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj:ubj)}(h h]h }(hjLhhhNhNubah}(h]h ]j 
ah"]h$]h&]uh1jhj:ubj)}(h min_activeh]h min_active}(hjZhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj:ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjhMubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhMhjhhubjC)}(hhh]h)}(h)adjust min_active of an unbound workqueueh]h)adjust min_active of an unbound workqueue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjhMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(hX(**Parameters** ``struct workqueue_struct *wq`` target unbound workqueue ``int min_active`` new min_active value **Description** Set min_active of an unbound workqueue. Unlike other types of workqueues, an unbound workqueue is not guaranteed to be able to process max_active interdependent work items. Instead, an unbound workqueue is guaranteed to be able to process min_active number of interdependent work items which is ``WQ_DFL_MIN_ACTIVE`` by default. 
Use this function to adjust the min_active value between 0 and the current max_active.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubjg)}(hhh](jl)}(h9``struct workqueue_struct *wq`` target unbound workqueue h](jr)}(h``struct workqueue_struct *wq``h]j)}(hjh]hstruct workqueue_struct *wq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubj)}(hhh]h)}(htarget unbound workqueueh]htarget unbound workqueue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhMhjubjl)}(h(``int min_active`` new min_active value h](jr)}(h``int min_active``h]j)}(hjh]hint min_active}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubj)}(hhh]h)}(hnew min_active valueh]hnew min_active value}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhMhjubeh}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hj9h]h Description}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj7ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(hXHSet min_active of an unbound workqueue. Unlike other types of workqueues, an unbound workqueue is not guaranteed to be able to process max_active interdependent work items. Instead, an unbound workqueue is guaranteed to be able to process min_active number of interdependent work items which is ``WQ_DFL_MIN_ACTIVE`` by default.h](hX'Set min_active of an unbound workqueue. Unlike other types of workqueues, an unbound workqueue is not guaranteed to be able to process max_active interdependent work items. 
Instead, an unbound workqueue is guaranteed to be able to process min_active number of interdependent work items which is }(hjOhhhNhNubj)}(h``WQ_DFL_MIN_ACTIVE``h]hWQ_DFL_MIN_ACTIVE}(hjWhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjOubh by default.}(hjOhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubh)}(hVUse this function to adjust the min_active value between 0 and the current max_active.h]hVUse this function to adjust the min_active value between 0 and the current max_active.}(hjphhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jcurrent_work (C function)c.current_workhNtauh1jhjhhhNhNubj)}(hhh](j)}(h(struct work_struct * current_work (void)h]j)}(h&struct work_struct *current_work(void)h](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhMubh)}(hhh]j)}(h work_structh]h work_struct}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3 current_worksbc.current_workasbuh1hhjhhhjhMubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhMubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjhhhjhMubj)}(h current_workh]j)}(hjh]h current_work}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhMubj|)}(h(void)h]j)}(hvoidh]j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjhMubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhMhjhhubjC)}(hhh]h)}(h'retrieve ``current`` task's work structh](h retrieve }(hjChhhNhNubj)}(h ``current``h]hcurrent}(hjKhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjCubh task’s work struct}(hjChhhNhNubeh}(h]h 
]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj@hhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjhMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejmjfjmjgjhjiuh1jhhhjhNhNubjk)}(hX'**Parameters** ``void`` no arguments **Description** Determine if ``current`` task is a workqueue worker and what it's working on. Useful to find out the context that the ``current`` task is running in. **Return** work struct if ``current`` task is a workqueue worker, ``NULL`` otherwise.h](h)}(h**Parameters**h]j)}(hjwh]h Parameters}(hjyhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjuubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjqubjg)}(hhh]jl)}(h``void`` no arguments h](jr)}(h``void``h]j)}(hjh]hvoid}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chKhjubj)}(hhh]h)}(h no argumentsh]h no arguments}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhKhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhKhjubah}(h]h ]h"]h$]h&]uh1jfhjqubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chKhjqubh)}(hDetermine if ``current`` task is a workqueue worker and what it's working on. Useful to find out the context that the ``current`` task is running in.h](h Determine if }(hjhhhNhNubj)}(h ``current``h]hcurrent}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh` task is a workqueue worker and what it’s working on. 
Useful to find out the context that the }(hjhhhNhNubj)}(h ``current``h]hcurrent}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh task is running in.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjqubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjqubh)}(hJwork struct if ``current`` task is a workqueue worker, ``NULL`` otherwise.h](hwork struct if }(hj2hhhNhNubj)}(h ``current``h]hcurrent}(hj:hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj2ubh task is a workqueue worker, }(hj2hhhNhNubj)}(h``NULL``h]hNULL}(hjLhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj2ubh otherwise.}(hj2hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjqubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j)current_is_workqueue_rescuer (C function)c.current_is_workqueue_rescuerhNtauh1jhjhhhNhNubj)}(hhh](j)}(h(bool current_is_workqueue_rescuer (void)h]j)}(h'bool current_is_workqueue_rescuer(void)h](j)}(hj*h]hbool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhMubj)}(hcurrent_is_workqueue_rescuerh]j)}(hcurrent_is_workqueue_rescuerh]hcurrent_is_workqueue_rescuer}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhMubj|)}(h(void)h]j)}(hvoidh]j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj}hhhjhMubah}(h]jxah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhMhjzhhubjC)}(hhh]h)}(h!is ``current`` workqueue rescuer?h](his }(hjhhhNhNubj)}(h ``current``h]hcurrent}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh workqueue rescuer?}(hjhhhNhNubeh}(h]h 
]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jBhjzhhhjhMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``void`` no arguments **Description** Determine whether ``current`` is a workqueue rescuer. Can be used from work functions to determine whether it's being run off the rescuer task. **Return** ``true`` if ``current`` is a workqueue rescuer. ``false`` otherwise.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hj!hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM#hjubjg)}(hhh]jl)}(h``void`` no arguments h](jr)}(h``void``h]j)}(hj>h]hvoid}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj<ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chKhj8ubj)}(hhh]h)}(h no argumentsh]h no arguments}(hjWhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjShKhjTubah}(h]h ]h"]h$]h&]uh1jhj8ubeh}(h]h ]h"]h$]h&]uh1jkhjShKhj5ubah}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hjyh]h Description}(hj{hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjwubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chKhjubh)}(hDetermine whether ``current`` is a workqueue rescuer. Can be used from work functions to determine whether it's being run off the rescuer task.h](hDetermine whether }(hjhhhNhNubj)}(h ``current``h]hcurrent}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhu is a workqueue rescuer. 
Can be used from work functions to determine whether it’s being run off the rescuer task.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM#hjubh)}(hD``true`` if ``current`` is a workqueue rescuer. ``false`` otherwise.h](j)}(h``true``h]htrue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if }(hjhhhNhNubj)}(h ``current``h]hcurrent}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh is a workqueue rescuer. }(hjhhhNhNubj)}(h ``false``h]hfalse}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh otherwise.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM$hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j workqueue_congested (C function)c.workqueue_congestedhNtauh1jhjhhhNhNubj)}(hhh](j)}(h?bool workqueue_congested (int cpu, struct workqueue_struct *wq)h]j)}(h>bool workqueue_congested(int cpu, struct workqueue_struct *wq)h](j)}(hj*h]hbool}(hj)hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj%hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM.ubj)}(h h]h }(hj7hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj%hhhj6hM.ubj)}(hworkqueue_congestedh]j)}(hworkqueue_congestedh]hworkqueue_congested}(hjIhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjEubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj%hhhj6hM.ubj|)}(h&(int cpu, struct workqueue_struct *wq)h](j)}(hint cpuh](j)}(hinth]hint}(hjehhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjaubj)}(h h]h }(hjshhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjaubj)}(hcpuh]hcpu}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjaubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj]ubj)}(hstruct workqueue_struct *wqh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j 
ah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3jKsbc.workqueue_congestedasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hwqh]hwq}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj]ubeh}(h]h ]h"]h$]h&]jjuh1j{hj%hhhj6hM.ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj!hhhj6hM.ubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj6hM.hjhhubjC)}(hhh]h)}(h%test whether a workqueue is congestedh]h%test whether a workqueue is congested}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM.hjhhubah}(h]h ]h"]h$]h&]uh1jBhjhhhj6hM.ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej5jfj5jgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``int cpu`` CPU in question ``struct workqueue_struct *wq`` target workqueue **Description** Test whether **wq**'s cpu workqueue for **cpu** is congested. There is no synchronization around this function and the test result is unreliable and only useful as advisory hints or for debugging. If **cpu** is WORK_CPU_UNBOUND, the test is performed on the local CPU. With the exception of ordered workqueues, all workqueues have per-cpu pool_workqueues, each with its own congested state. A workqueue being congested on one CPU doesn't mean that the workqueue is contested on any other CPUs. 
**Return** ``true`` if congested, ``false`` otherwise.h](h)}(h**Parameters**h]j)}(hj?h]h Parameters}(hjAhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj=ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM2hj9ubjg)}(hhh](jl)}(h``int cpu`` CPU in question h](jr)}(h ``int cpu``h]j)}(hj^h]hint cpu}(hj`hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj\ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM/hjXubj)}(hhh]h)}(hCPU in questionh]hCPU in question}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjshM/hjtubah}(h]h ]h"]h$]h&]uh1jhjXubeh}(h]h ]h"]h$]h&]uh1jkhjshM/hjUubjl)}(h1``struct workqueue_struct *wq`` target workqueue h](jr)}(h``struct workqueue_struct *wq``h]j)}(hjh]hstruct workqueue_struct *wq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM0hjubj)}(hhh]h)}(htarget workqueueh]htarget workqueue}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM0hjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhM0hjUubeh}(h]h ]h"]h$]h&]uh1jfhj9ubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM2hj9ubh)}(hTest whether **wq**'s cpu workqueue for **cpu** is congested. There is no synchronization around this function and the test result is unreliable and only useful as advisory hints or for debugging.h](h Test whether }(hjhhhNhNubj)}(h**wq**h]hwq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh’s cpu workqueue for }(hjhhhNhNubj)}(h**cpu**h]hcpu}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh is congested. 
There is no synchronization around this function and the test result is unreliable and only useful as advisory hints or for debugging.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM1hj9ubh)}(hGIf **cpu** is WORK_CPU_UNBOUND, the test is performed on the local CPU.h](hIf }(hjhhhNhNubj)}(h**cpu**h]hcpu}(hj#hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh= is WORK_CPU_UNBOUND, the test is performed on the local CPU.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM5hj9ubh)}(hWith the exception of ordered workqueues, all workqueues have per-cpu pool_workqueues, each with its own congested state. A workqueue being congested on one CPU doesn't mean that the workqueue is contested on any other CPUs.h]hWith the exception of ordered workqueues, all workqueues have per-cpu pool_workqueues, each with its own congested state. A workqueue being congested on one CPU doesn’t mean that the workqueue is contested on any other CPUs.}(hj<hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM7hj9ubh)}(h **Return**h]j)}(hjMh]hReturn}(hjOhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjKubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM<hj9ubh)}(h+``true`` if congested, ``false`` otherwise.h](j)}(h``true``h]htrue}(hjghhhNhNubah}(h]h ]h"]h$]h&]uh1jhjcubh if congested, }(hjchhhNhNubj)}(h ``false``h]hfalse}(hjyhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjcubh otherwise.}(hjchhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM=hj9ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jwork_busy (C function) c.work_busyhNtauh1jhjhhhNhNubj)}(hhh](j)}(h1unsigned int work_busy (struct work_struct *work)h]j)}(h0unsigned int work_busy(struct work_struct 
*work)h](j)}(hunsignedh]hunsigned}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMTubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhMTubj)}(hinth]hint}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhjhMTubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhMTubj)}(h work_busyh]j)}(h work_busyh]h work_busy}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhMTubj|)}(h(struct work_struct *work)h]j)}(hstruct work_struct *workh](j)}(hjh]hstruct}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(h work_structh]h work_struct}(hj)hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj&ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj+modnameN classnameNj7j:)}j=]j@)}j3jsb c.work_busyasbuh1hhjubj)}(h h]h }(hjIhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hjWhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hworkh]hwork}(hjdhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhMTubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjhMTubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhMThjhhubjC)}(hhh]h)}(h3test whether a work is currently pending or runningh]h3test whether a work is currently pending or running}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMThjhhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjhMTubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(hXD**Parameters** ``struct work_struct *work`` the work to be tested **Description** Test whether **work** is currently pending or running. There is no synchronization around this function and the test result is unreliable and only useful as advisory hints or for debugging. 
**Return** OR'd bitmask of WORK_BUSY_* bits.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMXhjubjg)}(hhh]jl)}(h3``struct work_struct *work`` the work to be tested h](jr)}(h``struct work_struct *work``h]j)}(hjh]hstruct work_struct *work}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMUhjubj)}(hhh]h)}(hthe work to be testedh]hthe work to be tested}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMUhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhMUhjubah}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hj h]h Description}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMWhjubh)}(hTest whether **work** is currently pending or running. There is no synchronization around this function and the test result is unreliable and only useful as advisory hints or for debugging.h](h Test whether }(hj hhhNhNubj)}(h**work**h]hwork}(hj(hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubh is currently pending or running. 
There is no synchronization around this function and the test result is unreliable and only useful as advisory hints or for debugging.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMVhjubh)}(h **Return**h]j)}(hjCh]hReturn}(hjEhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjAubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMZhjubh)}(h!OR'd bitmask of WORK_BUSY_* bits.h]h#OR’d bitmask of WORK_BUSY_* bits.}(hjYhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM[hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jset_worker_desc (C function)c.set_worker_deschNtauh1jhjhhhNhNubj)}(hhh](j)}(h+void set_worker_desc (const char *fmt, ...)h]j)}(h*void set_worker_desc(const char *fmt, ...)h](j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMvubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhMvubj)}(hset_worker_desch]j)}(hset_worker_desch]hset_worker_desc}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhMvubj|)}(h(const char *fmt, ...)h](j)}(hconst char *fmth](j)}(hjh]hconst}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubj)}(hcharh]hchar}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hfmth]hfmt}(hj hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(h...h]jU)}(hjph]h...}(hj"hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubah}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhMvubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjhMvubah}(h]j{ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhMvhj}hhubjC)}(hhh]h)}(h)set description for the current work itemh]h)set description for the current 
work item}(hjKhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMvhjHhhubah}(h]h ]h"]h$]h&]uh1jBhj}hhhjhMvubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejcjfjcjgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``const char *fmt`` printf-style format string ``...`` arguments for the format string **Description** This function can be called by a running work function to describe what the work item is about. If the worker task gets dumped, this information will be printed out together to help debugging. The description can be at most WORKER_DESC_LEN including the trailing '\0'.h](h)}(h**Parameters**h]j)}(hjmh]h Parameters}(hjohhhNhNubah}(h]h ]h"]h$]h&]uh1jhjkubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMzhjgubjg)}(hhh](jl)}(h/``const char *fmt`` printf-style format string h](jr)}(h``const char *fmt``h]j)}(hjh]hconst char *fmt}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMwhjubj)}(hhh]h)}(hprintf-style format stringh]hprintf-style format string}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMwhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhMwhjubjl)}(h(``...`` arguments for the format string h](jr)}(h``...``h]j)}(hjh]h...}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMxhjubj)}(hhh]h)}(harguments for the format stringh]harguments for the format string}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMxhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhMxhjubeh}(h]h ]h"]h$]h&]uh1jfhjgubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMzhjgubh)}(hXThis function can be called by a running work function to describe what the 
work item is about. If the worker task gets dumped, this information will be printed out together to help debugging. The description can be at most WORKER_DESC_LEN including the trailing '\0'.h]hXThis function can be called by a running work function to describe what the work item is about. If the worker task gets dumped, this information will be printed out together to help debugging. The description can be at most WORKER_DESC_LEN including the trailing ‘0’.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMyhjgubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jprint_worker_info (C function)c.print_worker_infohNtauh1jhjhhhNhNubj)}(hhh](j)}(hFvoid print_worker_info (const char *log_lvl, struct task_struct *task)h]j)}(hEvoid print_worker_info(const char *log_lvl, struct task_struct *task)h](j)}(hvoidh]hvoid}(hjEhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjAhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjThhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjAhhhjShMubj)}(hprint_worker_infoh]j)}(hprint_worker_infoh]hprint_worker_info}(hjfhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjbubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjAhhhjShMubj|)}(h/(const char *log_lvl, struct task_struct *task)h](j)}(hconst char *log_lvlh](j)}(hjh]hconst}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj~ubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj~ubj)}(hcharh]hchar}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj~ubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj~ubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj~ubj)}(hlog_lvlh]hlog_lvl}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj~ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjzubj)}(hstruct task_struct *taskh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(h task_structh]h task_struct}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN 
classnameNj7j:)}j=]j@)}j3jhsbc.print_worker_infoasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hj+hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(htaskh]htask}(hj8hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjzubeh}(h]h ]h"]h$]h&]jjuh1j{hjAhhhjShMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj=hhhjShMubah}(h]j8ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjShMhj:hhubjC)}(hhh]h)}(h,print out worker information and descriptionh]h,print out worker information and description}(hjbhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj_hhubah}(h]h ]h"]h$]h&]uh1jBhj:hhhjShMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejzjfjzjgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``const char *log_lvl`` the log level to use when printing ``struct task_struct *task`` target task **Description** If **task** is a worker and currently executing a work item, print out the name of the workqueue being serviced and worker description set with set_worker_desc() by the currently executing work item. This function can be safely called on any task as long as the task_struct itself is accessible. 
While safe, this function isn't synchronized and may print out mixups or garbages of limited length.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj~ubjg)}(hhh](jl)}(h;``const char *log_lvl`` the log level to use when printing h](jr)}(h``const char *log_lvl``h]j)}(hjh]hconst char *log_lvl}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubj)}(hhh]h)}(h"the log level to use when printingh]h"the log level to use when printing}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhMhjubjl)}(h)``struct task_struct *task`` target task h](jr)}(h``struct task_struct *task``h]j)}(hjh]hstruct task_struct *task}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubj)}(hhh]h)}(h target taskh]h target task}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhMhjubeh}(h]h ]h"]h$]h&]uh1jfhj~ubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj~ubh)}(hIf **task** is a worker and currently executing a work item, print out the name of the workqueue being serviced and worker description set with set_worker_desc() by the currently executing work item.h](hIf }(hj-hhhNhNubj)}(h**task**h]htask}(hj5hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj-ubh is a worker and currently executing a work item, print out the name of the workqueue being serviced and worker description set with set_worker_desc() by the currently executing work item.}(hj-hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: 
./kernel/workqueue.chMhj~ubh)}(hThis function can be safely called on any task as long as the task_struct itself is accessible. While safe, this function isn't synchronized and may print out mixups or garbages of limited length.h]hThis function can be safely called on any task as long as the task_struct itself is accessible. While safe, this function isn’t synchronized and may print out mixups or garbages of limited length.}(hjNhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj~ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jshow_one_workqueue (C function)c.show_one_workqueuehNtauh1jhjhhhNhNubj)}(hhh](j)}(h5void show_one_workqueue (struct workqueue_struct *wq)h]j)}(h4void show_one_workqueue(struct workqueue_struct *wq)h](j)}(hvoidh]hvoid}(hj}hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjyhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMPubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjyhhhjhMPubj)}(hshow_one_workqueueh]j)}(hshow_one_workqueueh]hshow_one_workqueue}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjyhhhjhMPubj|)}(h(struct workqueue_struct *wq)h]j)}(hstruct workqueue_struct *wqh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3jsbc.show_one_workqueueasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hwqh]hwq}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j{hjyhhhjhMPubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjuhhhjhMPubah}(h]jpah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhMPhjrhhubjC)}(hhh]h)}(h!dump state of specified workqueueh]h!dump state of specified 
workqueue}(hj=hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMPhj:hhubah}(h]h ]h"]h$]h&]uh1jBhjrhhhjhMPubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejUjfjUjgjhjiuh1jhhhjhNhNubjk)}(hW**Parameters** ``struct workqueue_struct *wq`` workqueue whose state will be printedh](h)}(h**Parameters**h]j)}(hj_h]h Parameters}(hjahhhNhNubah}(h]h ]h"]h$]h&]uh1jhj]ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMThjYubjg)}(hhh]jl)}(hE``struct workqueue_struct *wq`` workqueue whose state will be printedh](jr)}(h``struct workqueue_struct *wq``h]j)}(hj~h]hstruct workqueue_struct *wq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj|ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMVhjxubj)}(hhh]h)}(h%workqueue whose state will be printedh]h%workqueue whose state will be printed}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMQhjubah}(h]h ]h"]h$]h&]uh1jhjxubeh}(h]h ]h"]h$]h&]uh1jkhjhMVhjuubah}(h]h ]h"]h$]h&]uh1jfhjYubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j!show_one_worker_pool (C function)c.show_one_worker_poolhNtauh1jhjhhhNhNubj)}(hhh](j)}(h4void show_one_worker_pool (struct worker_pool *pool)h]j)}(h3void show_one_worker_pool(struct worker_pool *pool)h](j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM|ubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhM|ubj)}(hshow_one_worker_poolh]j)}(hshow_one_worker_poolh]hshow_one_worker_pool}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhM|ubj|)}(h(struct worker_pool *pool)h]j)}(hstruct worker_pool *poolh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hj"hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(h 
worker_poolh]h worker_pool}(hj3hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj0ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj5modnameN classnameNj7j:)}j=]j@)}j3jsbc.show_one_worker_poolasbuh1hhjubj)}(h h]h }(hjShhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hjahhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hpoolh]hpool}(hjnhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj ubah}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhM|ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjhM|ubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhM|hjhhubjC)}(hhh]h)}(h#dump state of specified worker poolh]h#dump state of specified worker pool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM|hjhhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjhM|ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(hV**Parameters** ``struct worker_pool *pool`` worker pool whose state will be printedh](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubjg)}(hhh]jl)}(hD``struct worker_pool *pool`` worker pool whose state will be printedh](jr)}(h``struct worker_pool *pool``h]j)}(hjh]hstruct worker_pool *pool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubj)}(hhh]h)}(h'worker pool whose state will be printedh]h'worker pool whose state will be printed}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM}hjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhMhjubah}(h]h ]h"]h$]h&]uh1jfhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j show_all_workqueues (C function)c.show_all_workqueueshNtauh1jhjhhhNhNubj)}(hhh](j)}(hvoid show_all_workqueues (void)h]j)}(hvoid 
show_all_workqueues(void)h](j)}(hvoidh]hvoid}(hj3hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj/hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjBhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj/hhhjAhMubj)}(hshow_all_workqueuesh]j)}(hshow_all_workqueuesh]hshow_all_workqueues}(hjThhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjPubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj/hhhjAhMubj|)}(h(void)h]j)}(hvoidh]j)}(hvoidh]hvoid}(hjphhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjlubah}(h]h ]h"]h$]h&]noemphjjuh1jhjhubah}(h]h ]h"]h$]h&]jjuh1j{hj/hhhjAhMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj+hhhjAhMubah}(h]j&ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjAhMhj(hhubjC)}(hhh]h)}(hdump workqueue stateh]hdump workqueue state}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jBhj(hhhjAhMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(h**Parameters** ``void`` no arguments **Description** Called from a sysrq handler and prints out all busy workqueues and pools.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubjg)}(hhh]jl)}(h``void`` no arguments h](jr)}(h``void``h]j)}(hjh]hvoid}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chKhjubj)}(hhh]h)}(h no argumentsh]h no arguments}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhKhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhKhjubah}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chKhjubh)}(hICalled from a sysrq handler and prints out all busy workqueues and pools.h]hICalled from a sysrq handler and prints out all busy workqueues and 
pools.}(hj,hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j&show_freezable_workqueues (C function)c.show_freezable_workqueueshNtauh1jhjhhhNhNubj)}(hhh](j)}(h%void show_freezable_workqueues (void)h]j)}(h$void show_freezable_workqueues(void)h](j)}(hvoidh]hvoid}(hj[hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjWhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjWhhhjihMubj)}(hshow_freezable_workqueuesh]j)}(hshow_freezable_workqueuesh]hshow_freezable_workqueues}(hj|hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjxubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjWhhhjihMubj|)}(h(void)h]j)}(hvoidh]j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j{hjWhhhjihMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjShhhjihMubah}(h]jNah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjihMhjPhhubjC)}(hhh]h)}(hdump freezable workqueue stateh]hdump freezable workqueue state}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jBhjPhhhjihMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(h**Parameters** ``void`` no arguments **Description** Called from try_to_freeze_tasks() and prints out all freezable workqueues still busy.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubjg)}(hhh]jl)}(h``void`` no arguments h](jr)}(h``void``h]j)}(hjh]hvoid}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chKhjubj)}(hhh]h)}(h no argumentsh]h no arguments}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhKhjubah}(h]h 
]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhKhjubah}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hj>h]h Description}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj<ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chKhjubh)}(hUCalled from try_to_freeze_tasks() and prints out all freezable workqueues still busy.h]hUCalled from try_to_freeze_tasks() and prints out all freezable workqueues still busy.}(hjThhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jrebind_workers (C function)c.rebind_workershNtauh1jhjhhhNhNubj)}(hhh](j)}(h.void rebind_workers (struct worker_pool *pool)h]j)}(h-void rebind_workers(struct worker_pool *pool)h](j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMBubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhMBubj)}(hrebind_workersh]j)}(hrebind_workersh]hrebind_workers}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhMBubj|)}(h(struct worker_pool *pool)h]j)}(hstruct worker_pool *poolh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(h worker_poolh]h worker_pool}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3jsbc.rebind_workersasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hj hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hpoolh]hpool}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhMBubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj{hhhjhMBubah}(h]jvah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhMBhjxhhubjC)}(hhh]h)}(h2rebind all workers of a pool to the associated CPUh]h2rebind all workers of a pool to the associated 
CPU}(hjChhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMBhj@hhubah}(h]h ]h"]h$]h&]uh1jBhjxhhhjhMBubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej[jfj[jgjhjiuh1jhhhjhNhNubjk)}(h**Parameters** ``struct worker_pool *pool`` pool of interest **Description** **pool->cpu** is coming online. Rebind all workers to the CPU.h](h)}(h**Parameters**h]j)}(hjeh]h Parameters}(hjghhhNhNubah}(h]h ]h"]h$]h&]uh1jhjcubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMFhj_ubjg)}(hhh]jl)}(h.``struct worker_pool *pool`` pool of interest h](jr)}(h``struct worker_pool *pool``h]j)}(hjh]hstruct worker_pool *pool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMChj~ubj)}(hhh]h)}(hpool of interesth]hpool of interest}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMChjubah}(h]h ]h"]h$]h&]uh1jhj~ubeh}(h]h ]h"]h$]h&]uh1jkhjhMChj{ubah}(h]h ]h"]h$]h&]uh1jfhj_ubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMEhj_ubh)}(h?**pool->cpu** is coming online. Rebind all workers to the CPU.h](j)}(h **pool->cpu**h]h pool->cpu}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh2 is coming online. 
Rebind all workers to the CPU.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMDhj_ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j,restore_unbound_workers_cpumask (C function)!c.restore_unbound_workers_cpumaskhNtauh1jhjhhhNhNubj)}(hhh](j)}(hHvoid restore_unbound_workers_cpumask (struct worker_pool *pool, int cpu)h]j)}(hGvoid restore_unbound_workers_cpumask(struct worker_pool *pool, int cpu)h](j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMzubj)}(h h]h }(hj!hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhj hMzubj)}(hrestore_unbound_workers_cpumaskh]j)}(hrestore_unbound_workers_cpumaskh]hrestore_unbound_workers_cpumask}(hj3hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj/ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhj hMzubj|)}(h#(struct worker_pool *pool, int cpu)h](j)}(hstruct worker_pool *poolh](j)}(hjh]hstruct}(hjOhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjKubj)}(h h]h }(hj\hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjKubh)}(hhh]j)}(h worker_poolh]h worker_pool}(hjmhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjomodnameN classnameNj7j:)}j=]j@)}j3j5sb!c.restore_unbound_workers_cpumaskasbuh1hhjKubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjKubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjKubj)}(hpoolh]hpool}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjKubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjGubj)}(hint cpuh](j)}(hinth]hint}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubj)}(hcpuh]hcpu}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjGubeh}(h]h ]h"]h$]h&]jjuh1j{hjhhhj hMzubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj hhhj hMzubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj hMzhjhhubjC)}(hhh]h)}(h"restore cpumask of unbound workersh]h"restore cpumask of unbound workers}(hjhhhNhNubah}(h]h 
]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMzhjhhubah}(h]h ]h"]h$]h&]uh1jBhjhhhj hMzubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``struct worker_pool *pool`` unbound pool of interest ``int cpu`` the CPU which is coming up **Description** An unbound pool may end up with a cpumask which doesn't have any online CPUs. When a worker of such pool get scheduled, the scheduler resets its cpus_allowed. If **cpu** is in **pool**'s cpumask which didn't have any online CPU before, cpus_allowed of all its workers should be restored.h](h)}(h**Parameters**h]j)}(hj)h]h Parameters}(hj+hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj'ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM~hj#ubjg)}(hhh](jl)}(h6``struct worker_pool *pool`` unbound pool of interest h](jr)}(h``struct worker_pool *pool``h]j)}(hjHh]hstruct worker_pool *pool}(hjJhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjFubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM{hjBubj)}(hhh]h)}(hunbound pool of interesth]hunbound pool of interest}(hjahhhNhNubah}(h]h ]h"]h$]h&]uh1hhj]hM{hj^ubah}(h]h ]h"]h$]h&]uh1jhjBubeh}(h]h ]h"]h$]h&]uh1jkhj]hM{hj?ubjl)}(h'``int cpu`` the CPU which is coming up h](jr)}(h ``int cpu``h]j)}(hjh]hint cpu}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM|hj{ubj)}(hhh]h)}(hthe CPU which is coming uph]hthe CPU which is coming up}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM|hjubah}(h]h ]h"]h$]h&]uh1jhj{ubeh}(h]h ]h"]h$]h&]uh1jkhjhM|hj?ubeh}(h]h ]h"]h$]h&]uh1jfhj#ubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM~hj#ubh)}(hX!An unbound pool may end up with a cpumask which 
doesn't have any online CPUs. When a worker of such pool get scheduled, the scheduler resets its cpus_allowed. If **cpu** is in **pool**'s cpumask which didn't have any online CPU before, cpus_allowed of all its workers should be restored.h](hAn unbound pool may end up with a cpumask which doesn’t have any online CPUs. When a worker of such pool get scheduled, the scheduler resets its cpus_allowed. If }(hjhhhNhNubj)}(h**cpu**h]hcpu}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh is in }(hjhhhNhNubj)}(h**pool**h]hpool}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhk’s cpumask which didn’t have any online CPU before, cpus_allowed of all its workers should be restored.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM}hj#ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jwork_on_cpu_key (C function)c.work_on_cpu_keyhNtauh1jhjhhhNhNubj)}(hhh](j)}(hYlong work_on_cpu_key (int cpu, long (*fn)(void *), void *arg, struct lock_class_key *key)h]j)}(hWlong work_on_cpu_key(int cpu, long (*fn)(void*), void *arg, struct lock_class_key *key)h](j)}(hlongh]hlong}(hj%hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj!hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hj4hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj!hhhj3hMubj)}(hwork_on_cpu_keyh]j)}(hwork_on_cpu_keyh]hwork_on_cpu_key}(hjFhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjBubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj!hhhj3hMubj|)}(hC(int cpu, long (*fn)(void*), void *arg, struct lock_class_key *key)h](j)}(hint cpuh](j)}(hinth]hint}(hjbhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj^ubj)}(h h]h }(hjphhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj^ubj)}(hcpuh]hcpu}(hj~hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj^ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjZubj)}(hlong (*fn)(void*)h](j)}(hlongh]hlong}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(h(h]h(}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubjU)}(hjuh]h*7}(hjhhhNhNubah}(h]h 
]j`ah"]h$]h&]uh1jThjubj)}(hfnh]hfn}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubjU)}(h)h]h)}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubjU)}(hjh]h(}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubjU)}(hjh]h)}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjZubj)}(h void *argh](j)}(hvoidh]hvoid}(hj*hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj&ubj)}(h h]h }(hj8hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj&ubjU)}(hjuh]h*}(hjFhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj&ubj)}(hargh]harg}(hjShhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj&ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjZubj)}(hstruct lock_class_key *keyh](j)}(hjh]hstruct}(hjlhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhubj)}(h h]h }(hjyhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhubh)}(hhh]j)}(hlock_class_keyh]hlock_class_key}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3jHsbc.work_on_cpu_keyasbuh1hhjhubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjhubj)}(hkeyh]hkey}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjhubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjZubeh}(h]h ]h"]h$]h&]jjuh1j{hj!hhhj3hMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhj3hMubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj3hMhjhhubjC)}(hhh]h)}(h4run a function in thread context on a particular cpuh]h4run a function in thread context on a particular cpu}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjhhubah}(h]h ]h"]h$]h&]uh1jBhjhhhj3hMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``int cpu`` the cpu to run on ``long (*fn)(void *)`` the function to run ``void *arg`` the function arg ``struct lock_class_key *key`` The lock class key for lock debugging purposes **Description** It is up to the caller to ensure that the cpu doesn't go offline. 
The caller must not hold any locks which would prevent **fn** from completing. **Return** The value **fn** returns.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj ubjg)}(hhh](jl)}(h``int cpu`` the cpu to run on h](jr)}(h ``int cpu``h]j)}(hj0h]hint cpu}(hj2hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj.ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj*ubj)}(hhh]h)}(hthe cpu to run onh]hthe cpu to run on}(hjIhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjEhMhjFubah}(h]h ]h"]h$]h&]uh1jhj*ubeh}(h]h ]h"]h$]h&]uh1jkhjEhMhj'ubjl)}(h+``long (*fn)(void *)`` the function to run h](jr)}(h``long (*fn)(void *)``h]j)}(hjih]hlong (*fn)(void *)}(hjkhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjgubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjcubj)}(hhh]h)}(hthe function to runh]hthe function to run}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj~hMhjubah}(h]h ]h"]h$]h&]uh1jhjcubeh}(h]h ]h"]h$]h&]uh1jkhj~hMhj'ubjl)}(h``void *arg`` the function arg h](jr)}(h ``void *arg``h]j)}(hjh]h void *arg}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubj)}(hhh]h)}(hthe function argh]hthe function arg}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhMhj'ubjl)}(hN``struct lock_class_key *key`` The lock class key for lock debugging purposes h](jr)}(h``struct lock_class_key *key``h]j)}(hjh]hstruct lock_class_key *key}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjubj)}(hhh]h)}(h.The lock class key for lock debugging purposesh]h.The lock class key for lock debugging purposes}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h 
]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhMhj'ubeh}(h]h ]h"]h$]h&]uh1jfhj ubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj ubh)}(hIt is up to the caller to ensure that the cpu doesn't go offline. The caller must not hold any locks which would prevent **fn** from completing.h](h{It is up to the caller to ensure that the cpu doesn’t go offline. The caller must not hold any locks which would prevent }(hj,hhhNhNubj)}(h**fn**h]hfn}(hj4hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj,ubh from completing.}(hj,hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj ubh)}(h **Return**h]j)}(hjOh]hReturn}(hjQhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjMubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj ubh)}(hThe value **fn** returns.h](h The value }(hjehhhNhNubj)}(h**fn**h]hfn}(hjmhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjeubh returns.}(hjehhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hj ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j$freeze_workqueues_begin (C function)c.freeze_workqueues_beginhNtauh1jhjhhhNhNubj)}(hhh](j)}(h#void freeze_workqueues_begin (void)h]j)}(h"void freeze_workqueues_begin(void)h](j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhMubj)}(hfreeze_workqueues_beginh]j)}(hfreeze_workqueues_beginh]hfreeze_workqueues_begin}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhMubj|)}(h(void)h]j)}(hvoidh]j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhMubeh}(h]h 
]h"]h$]h&]jjj4uh1jj5j6hjhhhjhMubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhMhjhhubjC)}(hhh]h)}(hbegin freezing workqueuesh]hbegin freezing workqueues}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj hhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjhMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej%jfj%jgjhjiuh1jhhhjhNhNubjk)}(hX$**Parameters** ``void`` no arguments **Description** Start freezing workqueues. After this function returns, all freezable workqueues will queue new works to their inactive_works list instead of pool->worklist. **Context** Grabs and releases wq_pool_mutex, wq->mutex and pool->lock's.h](h)}(h**Parameters**h]j)}(hj/h]h Parameters}(hj1hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj-ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hj)ubjg)}(hhh]jl)}(h``void`` no arguments h](jr)}(h``void``h]j)}(hjNh]hvoid}(hjPhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjLubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chKhjHubj)}(hhh]h)}(h no argumentsh]h no arguments}(hjghhhNhNubah}(h]h ]h"]h$]h&]uh1hhjchKhjdubah}(h]h ]h"]h$]h&]uh1jhjHubeh}(h]h ]h"]h$]h&]uh1jkhjchKhjEubah}(h]h ]h"]h$]h&]uh1jfhj)ubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chKhj)ubh)}(hStart freezing workqueues. After this function returns, all freezable workqueues will queue new works to their inactive_works list instead of pool->worklist.h]hStart freezing workqueues. 
After this function returns, all freezable workqueues will queue new works to their inactive_works list instead of pool->worklist.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj)ubh)}(h **Context**h]j)}(hjh]hContext}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM!hj)ubh)}(h=Grabs and releases wq_pool_mutex, wq->mutex and pool->lock's.h]h?Grabs and releases wq_pool_mutex, wq->mutex and pool->lock’s.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM"hj)ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j#freeze_workqueues_busy (C function)c.freeze_workqueues_busyhNtauh1jhjhhhNhNubj)}(hhh](j)}(h"bool freeze_workqueues_busy (void)h]j)}(h!bool freeze_workqueues_busy(void)h](j)}(hj*h]hbool}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM8ubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhM8ubj)}(hfreeze_workqueues_busyh]j)}(hfreeze_workqueues_busyh]hfreeze_workqueues_busy}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhM8ubj|)}(h(void)h]j)}(hvoidh]j)}(hvoidh]hvoid}(hj1hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj-ubah}(h]h ]h"]h$]h&]noemphjjuh1jhj)ubah}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhM8ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjhM8ubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhM8hjhhubjC)}(hhh]h)}(h$are freezable workqueues still busy?h]h$are freezable workqueues still busy?}(hj[hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM8hjXhhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjhM8ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejsjfjsjgjhjiuh1jhhhjhNhNubjk)}(hXK**Parameters** ``void`` no arguments **Description** Check whether freezing is complete. 
This function must be called between freeze_workqueues_begin() and thaw_workqueues(). **Context** Grabs and releases wq_pool_mutex. **Return** ``true`` if some freezable workqueues are still busy. ``false`` if freezing is complete.h](h)}(h**Parameters**h]j)}(hj}h]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj{ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM<hjwubjg)}(hhh]jl)}(h``void`` no arguments h](jr)}(h``void``h]j)}(hjh]hvoid}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chKhjubj)}(hhh]h)}(h no argumentsh]h no arguments}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhKhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhKhjubah}(h]h ]h"]h$]h&]uh1jfhjwubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chKhjwubh)}(hzCheck whether freezing is complete. This function must be called between freeze_workqueues_begin() and thaw_workqueues().h]hzCheck whether freezing is complete. 
This function must be called between freeze_workqueues_begin() and thaw_workqueues().}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM9hjwubh)}(h **Context**h]j)}(hjh]hContext}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM<hjwubh)}(h!Grabs and releases wq_pool_mutex.h]h!Grabs and releases wq_pool_mutex.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM=hjwubh)}(h **Return**h]j)}(hj%h]hReturn}(hj'hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj#ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM?hjwubh)}(hY``true`` if some freezable workqueues are still busy. ``false`` if freezing is complete.h](j)}(h``true``h]htrue}(hj?hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj;ubh/ if some freezable workqueues are still busy. 
}(hj;hhhNhNubj)}(h ``false``h]hfalse}(hjQhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj;ubh if freezing is complete.}(hj;hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM@hjwubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jthaw_workqueues (C function)c.thaw_workqueueshNtauh1jhjhhhNhNubj)}(hhh](j)}(hvoid thaw_workqueues (void)h]j)}(hvoid thaw_workqueues(void)h](j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMfubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhMfubj)}(hthaw_workqueuesh]j)}(hthaw_workqueuesh]hthaw_workqueues}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhMfubj|)}(h(void)h]j)}(hvoidh]j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhMfubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjhMfubah}(h]j}ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhMfhjhhubjC)}(hhh]h)}(hthaw workqueuesh]hthaw workqueues}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMfhjhhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjhMfubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej jfj jgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``void`` no arguments **Description** Thaw workqueues. Normal queueing is restored and all collected frozen works are transferred to their respective pool worklists. 
**Context** Grabs and releases wq_pool_mutex, wq->mutex and pool->lock's.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMjhj ubjg)}(hhh]jl)}(h``void`` no arguments h](jr)}(h``void``h]j)}(hj2h]hvoid}(hj4hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj0ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chKhj,ubj)}(hhh]h)}(h no argumentsh]h no arguments}(hjKhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjGhKhjHubah}(h]h ]h"]h$]h&]uh1jhj,ubeh}(h]h ]h"]h$]h&]uh1jkhjGhKhj)ubah}(h]h ]h"]h$]h&]uh1jfhj ubh)}(h**Description**h]j)}(hjmh]h Description}(hjohhhNhNubah}(h]h ]h"]h$]h&]uh1jhjkubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chKhj ubh)}(hThaw workqueues. Normal queueing is restored and all collected frozen works are transferred to their respective pool worklists.h]hThaw workqueues. 
Normal queueing is restored and all collected frozen works are transferred to their respective pool worklists.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMghj ubh)}(h **Context**h]j)}(hjh]hContext}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMjhj ubh)}(h=Grabs and releases wq_pool_mutex, wq->mutex and pool->lock's.h]h?Grabs and releases wq_pool_mutex, wq->mutex and pool->lock’s.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMkhj ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j2workqueue_unbound_housekeeping_update (C function)'c.workqueue_unbound_housekeeping_updatehNtauh1jhjhhhNhNubj)}(hhh](j)}(hDint workqueue_unbound_housekeeping_update (const struct cpumask *hk)h]j)}(hCint workqueue_unbound_housekeeping_update(const struct cpumask *hk)h](j)}(hinth]hint}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjhhhjhMubj)}(h%workqueue_unbound_housekeeping_updateh]j)}(h%workqueue_unbound_housekeeping_updateh]h%workqueue_unbound_housekeeping_update}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjhhhjhMubj|)}(h(const struct cpumask *hk)h]j)}(hconst struct cpumask *hkh](j)}(hjh]hconst}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj ubj)}(h h]h }(hj# hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj ubj)}(hjh]hstruct}(hj1 hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj ubj)}(h h]h }(hj> hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj ubh)}(hhh]j)}(hcpumaskh]hcpumask}(hjO hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjL ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjQ modnameN classnameNj7j:)}j=]j@)}j3jsb'c.workqueue_unbound_housekeeping_updateasbuh1hhj ubj)}(h h]h }(hjo hhhNhNubah}(h]h ]j 
ah"]h$]h&]uh1jhj ubjU)}(hjuh]h*}(hj} hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj ubj)}(hhkh]hhk}(hj hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj ubah}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjhMubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhMhjhhubjC)}(hhh]h)}(h%Propagate housekeeping cpumask updateh]h%Propagate housekeeping cpumask update}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj hhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjhMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej jfj jgjhjiuh1jhhhjhNhNubjk)}(hXr**Parameters** ``const struct cpumask *hk`` the new housekeeping cpumask **Description** Update the unbound workqueue cpumask on top of the new housekeeping cpumask such that the effective unbound affinity is the intersection of the new housekeeping with the requested affinity set via nohz_full=/isolcpus= or sysfs. **Return** 0 on success and -errno on failure.h](h)}(h**Parameters**h]j)}(hj h]h Parameters}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj ubjg)}(hhh]jl)}(h:``const struct cpumask *hk`` the new housekeeping cpumask h](jr)}(h``const struct cpumask *hk``h]j)}(hj h]hconst struct cpumask *hk}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj ubj)}(hhh]h)}(hthe new housekeeping cpumaskh]hthe new housekeeping cpumask}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj hMhj ubah}(h]h ]h"]h$]h&]uh1jhj ubeh}(h]h ]h"]h$]h&]uh1jkhj hMhj ubah}(h]h ]h"]h$]h&]uh1jfhj ubh)}(h**Description**h]j)}(hj0 h]h Description}(hj2 hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj. 
ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj ubh)}(hUpdate the unbound workqueue cpumask on top of the new housekeeping cpumask such that the effective unbound affinity is the intersection of the new housekeeping with the requested affinity set via nohz_full=/isolcpus= or sysfs.h]hUpdate the unbound workqueue cpumask on top of the new housekeeping cpumask such that the effective unbound affinity is the intersection of the new housekeeping with the requested affinity set via nohz_full=/isolcpus= or sysfs.}(hjF hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj ubh)}(h **Return**h]j)}(hjW h]hReturn}(hjY hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjU ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj ubh)}(h#0 on success and -errno on failure.h]h#0 on success and -errno on failure.}(hjm hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j*workqueue_set_unbound_cpumask (C function)c.workqueue_set_unbound_cpumaskhNtauh1jhjhhhNhNubj)}(hhh](j)}(h9int workqueue_set_unbound_cpumask (cpumask_var_t cpumask)h]j)}(h8int workqueue_set_unbound_cpumask(cpumask_var_t cpumask)h](j)}(hinth]hint}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hj hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj hhhj hMubj)}(hworkqueue_set_unbound_cpumaskh]j)}(hworkqueue_set_unbound_cpumaskh]hworkqueue_set_unbound_cpumask}(hj hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj hhhj hMubj|)}(h(cpumask_var_t cpumask)h]j)}(hcpumask_var_t cpumaskh](h)}(hhh]j)}(h cpumask_var_th]h cpumask_var_t}(hj hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&] 
refdomainj_reftypej3 reftargetj modnameN classnameNj7j:)}j=]j@)}j3j sbc.workqueue_set_unbound_cpumaskasbuh1hhj ubj)}(h h]h }(hj hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj ubj)}(hcpumaskh]hcpumask}(hj hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj ubah}(h]h ]h"]h$]h&]jjuh1j{hj hhhj hMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hj hhhj hMubah}(h]j ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj hMhj hhubjC)}(hhh]h)}(h!Set the low-level unbound cpumaskh]h!Set the low-level unbound cpumask}(hj4 hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj1 hhubah}(h]h ]h"]h$]h&]uh1jBhj hhhj hMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejL jfjL jgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``cpumask_var_t cpumask`` the cpumask to set **Description** The low-level workqueues cpumask is a global cpumask that limits the affinity of all unbound workqueues. This function check the **cpumask** and apply it to all unbound workqueues and updates all pwqs of them. 
**Return** 0 - Success -EINVAL - Invalid **cpumask** -ENOMEM - Failed to allocate memory for attrs or pwqs.h](h)}(h**Parameters**h]j)}(hjV h]h Parameters}(hjX hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjT ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjP ubjg)}(hhh]jl)}(h-``cpumask_var_t cpumask`` the cpumask to set h](jr)}(h``cpumask_var_t cpumask``h]j)}(hju h]hcpumask_var_t cpumask}(hjw hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjs ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjo ubj)}(hhh]h)}(hthe cpumask to seth]hthe cpumask to set}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj hMhj ubah}(h]h ]h"]h$]h&]uh1jhjo ubeh}(h]h ]h"]h$]h&]uh1jkhj hMhjl ubah}(h]h ]h"]h$]h&]uh1jfhjP ubh)}(h**Description**h]j)}(hj h]h Description}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjP ubj)}(hThe low-level workqueues cpumask is a global cpumask that limits the affinity of all unbound workqueues. This function check the **cpumask** and apply it to all unbound workqueues and updates all pwqs of them. h]h)}(hThe low-level workqueues cpumask is a global cpumask that limits the affinity of all unbound workqueues. This function check the **cpumask** and apply it to all unbound workqueues and updates all pwqs of them.h](hThe low-level workqueues cpumask is a global cpumask that limits the affinity of all unbound workqueues. 
This function check the }(hj hhhNhNubj)}(h **cpumask**h]hcpumask}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubhE and apply it to all unbound workqueues and updates all pwqs of them.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj ubah}(h]h ]h"]h$]h&]uh1jhj hMhjP ubh)}(h **Return**h]j)}(hj h]hReturn}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjP ubh)}(hf0 - Success -EINVAL - Invalid **cpumask** -ENOMEM - Failed to allocate memory for attrs or pwqs.h](h$0 - Success -EINVAL - Invalid }(hj hhhNhNubj)}(h **cpumask**h]hcpumask}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubh7 -ENOMEM - Failed to allocate memory for attrs or pwqs.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjP ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j%workqueue_sysfs_register (C function)c.workqueue_sysfs_registerhNtauh1jhjhhhNhNubj)}(hhh](j)}(h:int workqueue_sysfs_register (struct workqueue_struct *wq)h]j)}(h9int workqueue_sysfs_register(struct workqueue_struct *wq)h](j)}(hinth]hint}(hjJ hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjF hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMubj)}(h h]h }(hjY hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjF hhhjX hMubj)}(hworkqueue_sysfs_registerh]j)}(hworkqueue_sysfs_registerh]hworkqueue_sysfs_register}(hjk hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjg ubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjF hhhjX hMubj|)}(h(struct workqueue_struct *wq)h]j)}(hstruct workqueue_struct *wqh](j)}(hjh]hstruct}(hj hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj ubj)}(h h]h }(hj hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj ubh)}(hhh]j)}(hworkqueue_structh]hworkqueue_struct}(hj hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj modnameN classnameNj7j:)}j=]j@)}j3jm 
sbc.workqueue_sysfs_registerasbuh1hhj ubj)}(h h]h }(hj hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj ubjU)}(hjuh]h*}(hj hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj ubj)}(hwqh]hwq}(hj hhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj ubah}(h]h ]h"]h$]h&]jjuh1j{hjF hhhjX hMubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjB hhhjX hMubah}(h]j= ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjX hMhj? hhubjC)}(hhh]h)}(h!make a workqueue visible in sysfsh]h!make a workqueue visible in sysfs}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj hhubah}(h]h ]h"]h$]h&]uh1jBhj? hhhjX hMubeh}(h]h ](j_functioneh"]h$]h&]jdj_jej" jfj" jgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``struct workqueue_struct *wq`` the workqueue to register **Description** Expose **wq** in sysfs under /sys/bus/workqueue/devices. alloc_workqueue*() automatically calls this function if WQ_SYSFS is set which is the preferred method. Workqueue user should use this function directly iff it wants to apply workqueue_attrs before making the workqueue visible in sysfs; otherwise, apply_workqueue_attrs() may race against userland updating the attributes. **Return** 0 on success, -errno on failure.h](h)}(h**Parameters**h]j)}(hj, h]h Parameters}(hj. 
hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj* ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj& ubjg)}(hhh]jl)}(h:``struct workqueue_struct *wq`` the workqueue to register h](jr)}(h``struct workqueue_struct *wq``h]j)}(hjK h]hstruct workqueue_struct *wq}(hjM hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjI ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhjE ubj)}(hhh]h)}(hthe workqueue to registerh]hthe workqueue to register}(hjd hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj` hMhja ubah}(h]h ]h"]h$]h&]uh1jhjE ubeh}(h]h ]h"]h$]h&]uh1jkhj` hMhjB ubah}(h]h ]h"]h$]h&]uh1jfhj& ubh)}(h**Description**h]j)}(hj h]h Description}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj& ubh)}(hExpose **wq** in sysfs under /sys/bus/workqueue/devices. alloc_workqueue*() automatically calls this function if WQ_SYSFS is set which is the preferred method.h](hExpose }(hj hhhNhNubj)}(h**wq**h]hwq}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubh in sysfs under /sys/bus/workqueue/devices. 
alloc_workqueue*() automatically calls this function if WQ_SYSFS is set which is the preferred method.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj& ubh)}(hWorkqueue user should use this function directly iff it wants to apply workqueue_attrs before making the workqueue visible in sysfs; otherwise, apply_workqueue_attrs() may race against userland updating the attributes.h]hWorkqueue user should use this function directly iff it wants to apply workqueue_attrs before making the workqueue visible in sysfs; otherwise, apply_workqueue_attrs() may race against userland updating the attributes.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj& ubh)}(h **Return**h]j)}(hj h]hReturn}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj& ubh)}(h 0 on success, -errno on failure.h]h 0 on success, -errno on failure.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMhj& ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j'workqueue_sysfs_unregister (C function)c.workqueue_sysfs_unregisterhNtauh1jhjhhhNhNubj)}(hhh](j)}(h=void workqueue_sysfs_unregister (struct workqueue_struct *wq)h]j)}(h(const struct cpumask *pod_cpus, struct wq_pod_type *smt_pods)h](j)}(hconst struct cpumask *pod_cpush](j)}(hjh]hconst}(hj3hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj/ubj)}(h h]h }(hj@hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj/ubj)}(hjh]hstruct}(hjNhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj/ubj)}(h h]h }(hj[hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj/ubh)}(hhh]j)}(hcpumaskh]hcpumask}(hjlhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjiubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjnmodnameN classnameNj7j:)}j=]j@)}j3jsbc.llc_count_coresasbuh1hhj/ubj)}(h h]h }(hjhhhNhNubah}(h]h 
]j ah"]h$]h&]uh1jhj/ubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThj/ubj)}(hpod_cpush]hpod_cpus}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhj/ubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj+ubj)}(hstruct wq_pod_type *smt_podsh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(h wq_pod_typeh]h wq_pod_type}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]jc.llc_count_coresasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hj hhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hsmt_podsh]hsmt_pods}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhj+ubeh}(h]h ]h"]h$]h&]jjuh1j{hjhhhjhM ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhjhM ubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjhM hjhhubjC)}(hhh]h)}(h3count distinct cores (SMT groups) within an LLC podh]h3count distinct cores (SMT groups) within an LLC pod}(hjAhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hj>hhubah}(h]h ]h"]h$]h&]uh1jBhjhhhjhM ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejYjfjYjgjhjiuh1jhhhjhNhNubjk)}(hX>**Parameters** ``const struct cpumask *pod_cpus`` the cpumask of CPUs in the LLC pod ``struct wq_pod_type *smt_pods`` the SMT pod type, used to identify sibling groups **Description** A core is represented by the lowest-numbered CPU in its SMT group. 
Returns the number of distinct cores found in **pod_cpus**.h](h)}(h**Parameters**h]j)}(hjch]h Parameters}(hjehhhNhNubah}(h]h ]h"]h$]h&]uh1jhjaubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hj]ubjg)}(hhh](jl)}(hF``const struct cpumask *pod_cpus`` the cpumask of CPUs in the LLC pod h](jr)}(h"``const struct cpumask *pod_cpus``h]j)}(hjh]hconst struct cpumask *pod_cpus}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hj|ubj)}(hhh]h)}(h"the cpumask of CPUs in the LLC podh]h"the cpumask of CPUs in the LLC pod}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM hjubah}(h]h ]h"]h$]h&]uh1jhj|ubeh}(h]h ]h"]h$]h&]uh1jkhjhM hjyubjl)}(hS``struct wq_pod_type *smt_pods`` the SMT pod type, used to identify sibling groups h](jr)}(h ``struct wq_pod_type *smt_pods``h]j)}(hjh]hstruct wq_pod_type *smt_pods}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubj)}(hhh]h)}(h1the SMT pod type, used to identify sibling groupsh]h1the SMT pod type, used to identify sibling groups}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhM hjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhM hjyubeh}(h]h ]h"]h$]h&]uh1jfhj]ubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hj]ubh)}(h~A core is represented by the lowest-numbered CPU in its SMT group. Returns the number of distinct cores found in **pod_cpus**.h](hqA core is represented by the lowest-numbered CPU in its SMT group. 
Returns the number of distinct cores found in }(hj hhhNhNubj)}(h **pod_cpus**h]hpod_cpus}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubh.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hj]ubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j&llc_populate_cpu_shard_id (C function)c.llc_populate_cpu_shard_idhNtauh1jhjhhhNhNubj)}(hhh](j)}(hkvoid llc_populate_cpu_shard_id (const struct cpumask *pod_cpus, struct wq_pod_type *smt_pods, int nr_cores)h]j)}(hjvoid llc_populate_cpu_shard_id(const struct cpumask *pod_cpus, struct wq_pod_type *smt_pods, int nr_cores)h](j)}(hvoidh]hvoid}(hjMhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjIhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMh ubj)}(h h]h }(hj\hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjIhhhj[hMh ubj)}(hllc_populate_cpu_shard_idh]j)}(hllc_populate_cpu_shard_idh]hllc_populate_cpu_shard_id}(hjnhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjjubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjIhhhj[hMh ubj|)}(hL(const struct cpumask *pod_cpus, struct wq_pod_type *smt_pods, int nr_cores)h](j)}(hconst struct cpumask *pod_cpush](j)}(hjh]hconst}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubj)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(hcpumaskh]hcpumask}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetjmodnameN classnameNj7j:)}j=]j@)}j3jpsbc.llc_populate_cpu_shard_idasbuh1hhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hjhhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hpod_cpush]hpod_cpus}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(hstruct wq_pod_type *smt_podsh](j)}(hjh]hstruct}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hj$hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubh)}(hhh]j)}(h wq_pod_typeh]h wq_pod_type}(hj5hhhNhNubah}(h]h 
]j"ah"]h$]h&]uh1jhj2ubah}(h]h ]h"]h$]h&] refdomainj_reftypej3 reftargetj7modnameN classnameNj7j:)}j=]jc.llc_populate_cpu_shard_idasbuh1hhjubj)}(h h]h }(hjShhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubjU)}(hjuh]h*}(hjahhhNhNubah}(h]h ]j`ah"]h$]h&]uh1jThjubj)}(hsmt_podsh]hsmt_pods}(hjnhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubj)}(h int nr_coresh](j)}(hinth]hint}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubj)}(h h]h }(hjhhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjubj)}(hnr_coresh]hnr_cores}(hjhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]noemphjjuh1jhjubeh}(h]h ]h"]h$]h&]jjuh1j{hjIhhhj[hMh ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjEhhhj[hMh ubah}(h]j@ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj[hMh hjBhhubjC)}(hhh]h)}(h2populate cpu_shard_id[] for each CPU in an LLC podh]h2populate cpu_shard_id[] for each CPU in an LLC pod}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMh hjhhubah}(h]h ]h"]h$]h&]uh1jBhjBhhhj[hMh ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(hX**Parameters** ``const struct cpumask *pod_cpus`` the cpumask of CPUs in the LLC pod ``struct wq_pod_type *smt_pods`` the SMT pod type, used to identify sibling groups ``int nr_cores`` number of distinct cores in **pod_cpus** (from llc_count_cores()) **Description** Walks **pod_cpus** in order. At each SMT group leader, advances to the next shard once the current shard is full. 
Results are written to cpu_shard_id[].h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMl hjubjg)}(hhh](jl)}(hF``const struct cpumask *pod_cpus`` the cpumask of CPUs in the LLC pod h](jr)}(h"``const struct cpumask *pod_cpus``h]j)}(hjh]hconst struct cpumask *pod_cpus}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.c9hMi hjubj)}(hhh]h)}(h"the cpumask of CPUs in the LLC podh]h"the cpumask of CPUs in the LLC pod}(hj'hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj#hMi hj$ubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhj#hMi hjubjl)}(hS``struct wq_pod_type *smt_pods`` the SMT pod type, used to identify sibling groups h](jr)}(h ``struct wq_pod_type *smt_pods``h]j)}(hjGh]hstruct wq_pod_type *smt_pods}(hjIhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjEubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMj hjAubj)}(hhh]h)}(h1the SMT pod type, used to identify sibling groupsh]h1the SMT pod type, used to identify sibling groups}(hj`hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj\hMj hj]ubah}(h]h ]h"]h$]h&]uh1jhjAubeh}(h]h ]h"]h$]h&]uh1jkhj\hMj hjubjl)}(hS``int nr_cores`` number of distinct cores in **pod_cpus** (from llc_count_cores()) h](jr)}(h``int nr_cores``h]j)}(hjh]h int nr_cores}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj~ubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMk hjzubj)}(hhh]h)}(hAnumber of distinct cores in **pod_cpus** (from llc_count_cores())h](hnumber of distinct cores in }(hjhhhNhNubj)}(h **pod_cpus**h]hpod_cpus}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh (from llc_count_cores())}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhjhMk hjubah}(h]h ]h"]h$]h&]uh1jhjzubeh}(h]h ]h"]h$]h&]uh1jkhjhMk hjubeh}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hjh]h 
Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMm hjubh)}(hWalks **pod_cpus** in order. At each SMT group leader, advances to the next shard once the current shard is full. Results are written to cpu_shard_id[].h](hWalks }(hjhhhNhNubj)}(h **pod_cpus**h]hpod_cpus}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh in order. At each SMT group leader, advances to the next shard once the current shard is full. Results are written to cpu_shard_id[].}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chMl hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j'precompute_cache_shard_ids (C function)c.precompute_cache_shard_idshNtauh1jhjhhhNhNubj)}(hhh](j)}(h&void precompute_cache_shard_ids (void)h]j)}(h%void precompute_cache_shard_ids(void)h](j)}(hvoidh]hvoid}(hj$hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj hhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM ubj)}(h h]h }(hj3hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhj hhhj2hM ubj)}(hprecompute_cache_shard_idsh]j)}(hprecompute_cache_shard_idsh]hprecompute_cache_shard_ids}(hjEhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjAubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhj hhhj2hM ubj|)}(h(void)h]j)}(hvoidh]j)}(hvoidh]hvoid}(hjahhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj]ubah}(h]h ]h"]h$]h&]noemphjjuh1jhjYubah}(h]h ]h"]h$]h&]jjuh1j{hj hhhj2hM ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjhhhj2hM ubah}(h]jah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhj2hM hjhhubjC)}(hhh]h)}(h.assign each CPU its shard index within its LLCh]h.assign each CPU its shard index within its LLC}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjhhubah}(h]h ]h"]h$]h&]uh1jBhjhhhj2hM ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(h**Parameters** ``void`` no arguments **Description** Iterates over all 
LLC pods. For each pod, counts distinct cores then assigns shard indices to all CPUs in the pod. Must be called after WQ_AFFN_CACHE and WQ_AFFN_SMT have been initialized.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubjg)}(hhh]jl)}(h``void`` no arguments h](jr)}(h``void``h]j)}(hjh]hvoid}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chKhjubj)}(hhh]h)}(h no argumentsh]h no arguments}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhKhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhjhKhjubah}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hjh]h Description}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chKhjubh)}(hIterates over all LLC pods. For each pod, counts distinct cores then assigns shard indices to all CPUs in the pod. Must be called after WQ_AFFN_CACHE and WQ_AFFN_SMT have been initialized.h]hIterates over all LLC pods. For each pod, counts distinct cores then assigns shard indices to all CPUs in the pod. 
Must be called after WQ_AFFN_CACHE and WQ_AFFN_SMT have been initialized.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j$workqueue_init_topology (C function)c.workqueue_init_topologyhNtauh1jhjhhhNhNubj)}(hhh](j)}(h#void workqueue_init_topology (void)h]j)}(h"void workqueue_init_topology(void)h](j)}(hvoidh]hvoid}(hjLhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjHhhhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM ubj)}(h h]h }(hj[hhhNhNubah}(h]h ]j ah"]h$]h&]uh1jhjHhhhjZhM ubj)}(hworkqueue_init_topologyh]j)}(hworkqueue_init_topologyh]hworkqueue_init_topology}(hjmhhhNhNubah}(h]h ]j"ah"]h$]h&]uh1jhjiubah}(h]h ](j)j*eh"]h$]h&]jjuh1jhjHhhhjZhM ubj|)}(h(void)h]j)}(hvoidh]j)}(hvoidh]hvoid}(hjhhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]noemphjjuh1jhjubah}(h]h ]h"]h$]h&]jjuh1j{hjHhhhjZhM ubeh}(h]h ]h"]h$]h&]jjj4uh1jj5j6hjDhhhjZhM ubah}(h]j?ah ](j:j;eh"]h$]h&]j?j@)jAhuh1jhjZhM hjAhhubjC)}(hhh]h)}(h*initialize CPU pods for unbound workqueuesh]h*initialize CPU pods for unbound workqueues}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjhhubah}(h]h ]h"]h$]h&]uh1jBhjAhhhjZhM ubeh}(h]h ](j_functioneh"]h$]h&]jdj_jejjfjjgjhjiuh1jhhhjhNhNubjk)}(h**Parameters** ``void`` no arguments **Description** This is the third step of three-staged workqueue subsystem initialization and invoked after SMP and topology information are fully initialized. 
It initializes the unbound CPU pods accordingly.h](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubjg)}(hhh]jl)}(h``void`` no arguments h](jr)}(h``void``h]j)}(hjh]hvoid}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jqhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chKhjubj)}(hhh]h)}(h no argumentsh]h no arguments}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj hKhj ubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jkhj hKhjubah}(h]h ]h"]h$]h&]uh1jfhjubh)}(h**Description**h]j)}(hj/h]h Description}(hj1hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj-ubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chKhjubh)}(hThis is the third step of three-staged workqueue subsystem initialization and invoked after SMP and topology information are fully initialized. It initializes the unbound CPU pods accordingly.h]hThis is the third step of three-staged workqueue subsystem initialization and invoked after SMP and topology information are fully initialized. 
It initializes the unbound CPU pods accordingly.}(hjEhhhNhNubah}(h]h ]h"]h$]h&]uh1hhV/var/lib/git/docbuild/linux/Documentation/core-api/workqueue:795: ./kernel/workqueue.chM hjubeh}(h]h ] kernelindentah"]h$]h&]uh1jjhjhhhNhNubeh}(h]&kernel-inline-documentations-referenceah ]h"]&kernel inline documentations referenceah$]h&]uh1hhhhhhhhMubeh}(h] workqueueah ]h"] workqueueah$]h&]uh1hhhhhhhhKubeh}(h]h ]h"]h$]h&]sourcehuh1hcurrent_sourceN current_lineNsettingsdocutils.frontendValues)}(hN generatorN datestampN source_linkN source_urlN toc_backlinksj footnote_backlinksK sectnum_xformKstrip_commentsNstrip_elements_with_classesN strip_classesN report_levelK halt_levelKexit_status_levelKdebugNwarning_streamN tracebackinput_encoding utf-8-siginput_encoding_error_handlerstrictoutput_encodingutf-8output_encoding_error_handlerjerror_encodingutf-8error_encoding_error_handlerbackslashreplace language_codeenrecord_dependenciesNconfigN id_prefixhauto_id_prefixid dump_settingsNdump_internalsNdump_transformsNdump_pseudo_xmlNexpose_internalsNstrict_visitorN_disable_configN_sourcehʌ _destinationN _config_files]7/var/lib/git/docbuild/linux/Documentation/docutils.confafile_insertion_enabled raw_enabledKline_length_limitM'pep_referencesN pep_base_urlhttps://peps.python.org/pep_file_url_templatepep-%04drfc_referencesN rfc_base_url&https://datatracker.ietf.org/doc/html/ tab_widthKtrim_footnote_reference_spacesyntax_highlightlong smart_quotessmartquotes_locales]character_level_inline_markupdoctitle_xform docinfo_xformKsectsubtitle_xform image_loadinglinkembed_stylesheetcloak_email_addressessection_self_linkenvNubreporterNindirect_targets]substitution_defs}substitution_names}refnames}refids}nameids}(jhjejjj{jxj~j{jjjjjjj~j{jn jk j j jjjjjjjjjjjjj-j*jjjjj`j]u nametypes}(jhjj{j~jjjj~jn j jjjjjjj-jjj`uh}(jehjjjxjj{j~jjjjLjjj{jjk jj jq jj jj jj jjjjjjj*jjj0jjj]jjjj)j.j jjjj=jBj.#j3#j'j'j )j)j*j*j-j-j0j0jA3jF3j/5j45j7j7j9j9j<j$<j)>j.>j{?j?j@j@jBj$BjCjCjxEj}EjGjGjIjIjKjKjMjMj(Oj-OjwRj|RjUjUjwYj|Yj[j 
[j\j\j^j ^j_j_jajajdjdjfjfjXhj]hjijijkjkjmjmjojojqjqjsjsjujujxj xjS|jX|jjjajfjQjVjjjjjejjjZj_jjj]jbjޘjjjjojtjjjjjyj~jjjjjjj jjjjjĮjBjGj#j(jjjIjNj4j9j jjjjjj"j'jjjj jjjjjijnjjjijnjjj%j*jjjDjIj4j9jjjjjjjyj~jjjjjHjMjjjjjxj}jj!jjj{jj8j=jpjujjj&j+jNjSjvj{jj jjjjjjj}jjjj j j= jB jj jjjjjjj@jEjjj?jDu footnote_refs} citation_refs} autofootnotes]autofootnote_refs]symbol_footnotes]symbol_footnote_refs] footnotes] citations]autofootnote_startKsymbol_footnote_startK id_counter collectionsCounter}Rparse_messages]transform_messages] transformerN include_log] decorationNhhub.