.. SPDX-License-Identifier: GPL-2.0

================================================
Multi-Queue Block IO Queueing Mechanism (blk-mq)
================================================

The Multi-Queue Block IO Queueing Mechanism is an API to enable fast storage
devices to achieve a huge number of input/output operations per second (IOPS)
through queueing and submitting IO requests to block devices simultaneously,
benefiting from the parallelism offered by modern storage devices.

Introduction
============

Background
----------

Magnetic hard disks have been the de facto standard since the early days of
kernel development. The Block IO subsystem aimed to achieve the best possible
performance for those devices, which pay a high penalty for random access
because their bottleneck is the mechanical moving parts, far slower than any
other layer of the storage stack. One example of such an optimization
technique is ordering read/write requests according to the current position
of the hard disk head.
However, with the development of Solid State Drives and Non-Volatile Memories,
which have neither mechanical parts nor a random access penalty and are
capable of highly parallel access, the bottleneck of the stack moved from the
storage device to the operating system. In order to take advantage of the
parallelism in those devices' design, the multi-queue mechanism was
introduced.

The former design had a single queue to store block IO requests, protected by
a single lock. That did not scale well in SMP systems, due to dirty data in
caches and the bottleneck of having a single lock shared by multiple
processors. This setup also suffered from congestion when different processes
(or the same process, moving between CPUs) wanted to perform block IO.
Instead of this, the blk-mq API spawns multiple queues with individual entry
points local to the CPU, removing the need for a global lock. A deeper
explanation of how this works is covered in the following section
(`Operation`_).

Operation
---------

When the userspace performs IO to a block device (reading or writing a file,
for instance), blk-mq takes action: it will store and manage IO requests to
the block device, acting as middleware between the userspace (and a file
system, if present) and the block device driver.

blk-mq has two groups of queues: software staging queues and hardware dispatch
queues. When a request arrives at the block layer, it will try the shortest
path possible: send it directly to the hardware queue. However, there are two
cases in which it might not do that: if there is an IO scheduler attached to
the layer, or if we want to try to merge requests. In both cases, the request
will be sent to the software queue.
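To illustrate how IO enters blk-mq in the first place, the sketch below builds
and submits a single struct bio. It is a minimal sketch only: the block
device, page and sector are hypothetical, already-obtained resources, and the
``bio_alloc()`` signature assumed is the one used by recent kernels::

    /*
     * Minimal sketch: build a bio and hand it to the block layer, which
     * turns it into a struct request and queues it through blk-mq.
     */
    static void example_end_io(struct bio *bio)
    {
            /* Runs once the request built from this bio completes. */
            bio_put(bio);
    }

    static void example_submit_read(struct block_device *bdev,
                                    struct page *page, sector_t sector)
    {
            struct bio *bio = bio_alloc(bdev, 1, REQ_OP_READ, GFP_KERNEL);

            bio->bi_iter.bi_sector = sector;        /* location on the device */
            bio_add_page(bio, page, PAGE_SIZE, 0);  /* location in memory */
            bio->bi_end_io = example_end_io;

            submit_bio(bio);  /* enters blk-mq: staging or hardware queue */
    }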
Then, after the requests are processed by the software queues, they will be
placed at the hardware queue, a second stage queue where the hardware has
direct access to process those requests. However, if the hardware does not
have enough resources to accept more requests, blk-mq will place requests on
a temporary queue, to be sent in the future, when the hardware is able.

Software staging queues
~~~~~~~~~~~~~~~~~~~~~~~

The block IO subsystem adds requests to the software staging queues
(represented by struct blk_mq_ctx) in case they weren't sent directly to the
driver. A request is one or more BIOs, which arrive at the block layer through
the data structure struct bio. The block layer will then build a new structure
from it, the struct request, that will be used to communicate with the device
driver. Each queue has its own lock, and the number of queues is defined on a
per-CPU or per-node basis.

The staging queue can be used to merge requests for adjacent sectors. For
instance, requests for sectors 3-6, 6-7 and 7-9 can become one request for
3-9. Even though random access to SSDs and NVMs has the same response time as
sequential access, grouped requests for sequential access decrease the number
of individual requests. This technique of merging requests is called plugging;
a sketch of how a submitter drives it is shown below.
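The following is a minimal sketch of the plugging API as used by a submitter
such as a filesystem. The bios being submitted are hypothetical placeholders::

    /*
     * Minimal sketch of plugging: while the plug is active, requests built
     * from these bios are held back and may be merged with each other
     * before being dispatched to the queues.
     */
    static void example_submit_batch(struct bio **bios, int nr)
    {
            struct blk_plug plug;
            int i;

            blk_start_plug(&plug);          /* start collecting requests */
            for (i = 0; i < nr; i++)
                    submit_bio(bios[i]);    /* adjacent requests can merge */
            blk_finish_plug(&plug);         /* flush the plug list */
    }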
Along with that, the requests can be reordered by an IO scheduler to ensure
fairness of system resources (e.g. to ensure that no application suffers from
starvation) and/or to improve IO performance.

IO Schedulers
^^^^^^^^^^^^^

There are several schedulers implemented by the block layer, each one
following a heuristic to improve the IO performance. They are "pluggable" (as
in plug and play), in the sense that they can be selected at run time using
sysfs. You can read more about Linux's IO schedulers
`here <https://www.kernel.org/doc/html/latest/block/index.html>`_. Scheduling
happens only between requests in the same queue, so it is not possible to
merge requests from different queues; otherwise there would be cache thrashing
and a need for a lock on each queue. After the scheduling, the requests are
eligible to be sent to the hardware. One of the possible schedulers is the
NONE scheduler, the most straightforward one. It will just place requests on
whatever software queue the process is running on, without any reordering.
When the device starts processing requests in the hardware queue (a.k.a.
running the hardware queue), the software queues mapped to that hardware
queue will be drained in sequence according to their mapping.
Hardware dispatch queues
~~~~~~~~~~~~~~~~~~~~~~~~

The hardware queue (represented by struct blk_mq_hw_ctx) is a struct used by
device drivers to map the device submission queues (or device DMA ring
buffers), and is the last step of the block layer submission code before the
low level device driver takes ownership of the request. To run this queue, the
block layer removes requests from the associated software queues and tries to
dispatch them to the hardware.

If it's not possible to send the requests directly to hardware, they will be
added to a linked list (``hctx->dispatch``) of requests. Then, the next time
the block layer runs a queue, it will send the requests sitting in the
``dispatch`` list first, to ensure a fair dispatch for those requests that
were ready to be sent first. The number of hardware queues depends on the
number of hardware contexts supported by the hardware and its device driver,
but it will not be more than the number of cores of the system. There is no
reordering at this stage, and each software queue has a set of hardware queues
to send requests to. A sketch of a driver-side ``queue_rq`` handler is shown
after the note below.

.. note::

   Neither the block layer nor the device protocols guarantee
   the order of completion of requests. This must be handled by
   higher layers, like the filesystem.
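The sketch below shows the shape of a driver's ``queue_rq`` handler and how
returning ``BLK_STS_RESOURCE`` triggers the ``hctx->dispatch`` requeue
behaviour described above. ``example_hw_has_room()`` and ``example_issue()``
are hypothetical driver internals standing in for real hardware access::

    /* Minimal sketch of a blk_mq_ops->queue_rq handler. */
    static blk_status_t example_queue_rq(struct blk_mq_hw_ctx *hctx,
                                         const struct blk_mq_queue_data *bd)
    {
            struct request *rq = bd->rq;

            if (!example_hw_has_room(hctx->driver_data))
                    /* blk-mq parks rq on hctx->dispatch and retries later */
                    return BLK_STS_RESOURCE;

            blk_mq_start_request(rq);       /* mark as issued, arm the timeout */
            example_issue(hctx->driver_data, rq);
            return BLK_STS_OK;
    }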
Tag-based completion
~~~~~~~~~~~~~~~~~~~~

In order to indicate which request has been completed, every request is
identified by an integer, ranging from 0 to the dispatch queue size. This tag
is generated by the block layer and later reused by the device driver,
removing the need to create a redundant identifier. When a request is
completed in the driver, the tag is sent back to the block layer to notify it
of the finalization. This removes the need to do a linear search to find out
which IO has been completed.
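As a sketch of how a driver might use the tag, the fragment below maps the tag
reported by the hardware back to its request on completion.
``example_read_done_tag()`` and the interrupt handler itself are hypothetical;
a driver that defers completion work would call ``blk_mq_complete_request()``
here instead of ending the request directly::

    /*
     * Minimal sketch of tag-based completion: the tag that travelled to
     * the hardware with the command identifies the finished request, so
     * no search is needed.
     */
    static irqreturn_t example_irq_handler(int irq, void *data)
    {
            struct blk_mq_hw_ctx *hctx = data;
            unsigned int tag = example_read_done_tag(hctx->driver_data);
            struct request *rq = blk_mq_tag_to_rq(hctx->tags, tag);

            blk_mq_end_request(rq, BLK_STS_OK);     /* hand it back to blk-mq */
            return IRQ_HANDLED;
    }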
Further reading
---------------

- `Linux Block IO: Introducing Multi-queue SSD Access on Multi-core Systems <http://kernel.dk/blk-mq.pdf>`_

- `NOOP scheduler <https://en.wikipedia.org/wiki/Noop_scheduler>`_

- `Null block device driver <https://www.kernel.org/doc/html/latest/block/null_blk.html>`_

Source code documentation
=========================

enum blk_eh_timer_return
    How the timeout handler should proceed

    **Constants**

    ``BLK_EH_DONE``
        The block driver completed the command or will complete it at a later
        time.

    ``BLK_EH_RESET_TIMER``
        Reset the request timer and continue waiting for the request to
        complete.
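A minimal sketch of a driver ``timeout`` callback using these constants
follows. ``example_still_in_flight()`` is a hypothetical driver helper, and
the single-argument callback signature assumed here is the one used by recent
kernels::

    /* Minimal sketch of a blk_mq_ops->timeout handler. */
    static enum blk_eh_timer_return example_timeout(struct request *rq)
    {
            if (example_still_in_flight(rq))
                    return BLK_EH_RESET_TIMER;      /* give the hardware more time */

            blk_mq_end_request(rq, BLK_STS_TIMEOUT);
            return BLK_EH_DONE;                     /* the driver dealt with it */
    }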
struct blk_mq_hw_ctx
    State for a hardware queue facing the hardware block device

    **Definition**::

        struct blk_mq_hw_ctx {
            struct {
                spinlock_t              lock;
                struct list_head        dispatch;
                unsigned long           state;
            };
            struct delayed_work         run_work;
            cpumask_var_t               cpumask;
            int                         next_cpu;
            int                         next_cpu_batch;
            unsigned long               flags;
            void                        *sched_data;
            struct request_queue        *queue;
            struct blk_flush_queue      *fq;
            void                        *driver_data;
            struct sbitmap              ctx_map;
            struct blk_mq_ctx           *dispatch_from;
            unsigned int                dispatch_busy;
            unsigned short              type;
            unsigned short              nr_ctx;
            struct blk_mq_ctx           **ctxs;
            spinlock_t                  dispatch_wait_lock;
            wait_queue_entry_t          dispatch_wait;
            atomic_t                    wait_index;
            struct blk_mq_tags          *tags;
            struct blk_mq_tags          *sched_tags;
            unsigned int                numa_node;
            unsigned int                queue_num;
            atomic_t                    nr_active;
            struct hlist_node           cpuhp_online;
            struct hlist_node           cpuhp_dead;
            struct kobject              kobj;
        #ifdef CONFIG_BLK_DEBUG_FS
            struct dentry               *debugfs_dir;
            struct dentry               *sched_debugfs_dir;
        #endif
            struct list_head            hctx_list;
        };

    **Members**

    ``{unnamed_struct}``
        anonymous

    ``lock``
        Protects the dispatch list.

    ``dispatch``
        Used for requests that are ready to be dispatched to the hardware but
        for some reason (e.g. lack of resources) could not be sent to the
        hardware. As soon as the driver can send new requests, requests on
        this list will be sent first, for a fairer dispatch.

    ``state``
        BLK_MQ_S_* flags. Defines the state of the hw queue (active,
        scheduled to restart, stopped).

    ``run_work``
        Used for scheduling a hardware queue run at a later time.

    ``cpumask``
        Map of available CPUs where this hctx can run.

    ``next_cpu``
        Used by blk_mq_hctx_next_cpu() for round-robin CPU selection from
        **cpumask**.

    ``next_cpu_batch``
        Counter of how many works are left in the batch before changing to
        the next CPU.

    ``flags``
        BLK_MQ_F_* flags. Defines the behaviour of the queue.

    ``sched_data``
        Pointer owned by the IO scheduler attached to a request queue. It's
        up to the IO scheduler how to use this pointer.

    ``queue``
        Pointer to the request queue that owns this hardware context.

    ``fq``
        Queue of requests that need to perform a flush operation.

    ``driver_data``
        Pointer to data owned by the block driver that created this hctx.

    ``ctx_map``
        Bitmap for each software queue. If a bit is set, there is a pending
        request in that software queue.

    ``dispatch_from``
        Software queue to be used when no scheduler was selected.

    ``dispatch_busy``
        Number used by blk_mq_update_dispatch_busy() to decide if the
        hw_queue is busy, using an Exponential Weighted Moving Average
        algorithm.

    ``type``
        HCTX_TYPE_* flags. Type of hardware queue.

    ``nr_ctx``
        Number of software queues.

    ``ctxs``
        Array of software queues.

    ``dispatch_wait_lock``
        Lock for dispatch_wait queue.

    ``dispatch_wait``
        Waitqueue to put requests on when there is no tag available at the
        moment, to wait for another try in the future.

    ``wait_index``
        Index of next available dispatch_wait queue to insert requests.

    ``tags``
        Tags owned by the block driver. A tag from this set is only assigned
        when a request is dispatched from a hardware queue.

    ``sched_tags``
        Tags owned by the I/O scheduler. If there is an I/O scheduler
        associated with a request queue, a tag is assigned when that request
        is allocated. Otherwise, this member is not used.

    ``numa_node``
        NUMA node the storage adapter has been connected to.

    ``queue_num``
        Index of this hardware queue.

    ``nr_active``
        Number of active requests. Only used when a tag set is shared across
        request queues.

    ``cpuhp_online``
        List to store requests if the CPU is going to die.

    ``cpuhp_dead``
        List to store requests if some CPU dies.

    ``kobj``
        Kernel object for sysfs.

    ``debugfs_dir``
        debugfs directory for this hardware queue. Named as cpu.

    ``sched_debugfs_dir``
        debugfs directory for the scheduler.

    ``hctx_list``
        If this hctx is not in use, this is an entry in q->unused_hctx_list.
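A driver typically attaches its own per-queue state to a hardware queue
through the ``driver_data`` member above. The sketch below shows an
``init_hctx`` callback doing so; ``struct example_device`` and its
``hw_queues`` array are hypothetical driver structures::

    /*
     * Minimal sketch of a blk_mq_ops->init_hctx callback attaching
     * driver-private state to a hardware queue.
     */
    static int example_init_hctx(struct blk_mq_hw_ctx *hctx, void *driver_data,
                                 unsigned int hctx_idx)
    {
            struct example_device *dev = driver_data;  /* tag_set->driver_data */

            hctx->driver_data = &dev->hw_queues[hctx_idx];
            return 0;
    }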
struct blk_mq_queue_map
    Map software queues to hardware queues

    **Definition**::

        struct blk_mq_queue_map {
            unsigned int *mq_map;
            unsigned int nr_queues;
            unsigned int queue_offset;
        };

    **Members**

    ``mq_map``
        CPU ID to hardware queue index map. This is an array with nr_cpu_ids
        elements. Each element has a value in the range [**queue_offset**,
        **queue_offset** + **nr_queues**).

    ``nr_queues``
        Number of hardware queues to map CPU IDs onto.

    ``queue_offset``
        First hardware queue to map onto. Used by the PCIe NVMe driver to map
        each hardware queue type (enum hctx_type) onto a distinct set of
        hardware queues.
enum hctx_type
    Type of hardware queue

    **Constants**

    ``HCTX_TYPE_DEFAULT``
        All I/O not otherwise accounted for.

    ``HCTX_TYPE_READ``
        Just for READ I/O.

    ``HCTX_TYPE_POLL``
        Polled I/O of any kind.

    ``HCTX_MAX_TYPES``
        Number of types of hctx.
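The sketch below shows a ``map_queues`` callback for a driver that provides
only the default queue type and has no special topology, so it simply spreads
CPUs across its hardware queues. The void-returning callback form of recent
kernels is assumed::

    /* Minimal sketch of a blk_mq_ops->map_queues callback. */
    static void example_map_queues(struct blk_mq_tag_set *set)
    {
            /* Fill set->map[HCTX_TYPE_DEFAULT].mq_map with a default spread. */
            blk_mq_map_queues(&set->map[HCTX_TYPE_DEFAULT]);
    }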
struct blk_mq_tag_set

   tag set that can be shared between request queues

**Definition**::

    struct blk_mq_tag_set {
        const struct blk_mq_ops *ops;
        struct blk_mq_queue_map map[HCTX_MAX_TYPES];
        unsigned int nr_maps;
        unsigned int nr_hw_queues;
        unsigned int queue_depth;
        unsigned int reserved_tags;
        unsigned int cmd_size;
        int numa_node;
        unsigned int timeout;
        unsigned int flags;
        void *driver_data;
        struct blk_mq_tags **tags;
        struct blk_mq_tags *shared_tags;
        struct mutex tag_list_lock;
        struct list_head tag_list;
        struct srcu_struct *srcu;
    };

**Members**

``ops``
    Pointers to functions that implement block driver behavior.

``map``
    One or more ctx -> hctx mappings. One map exists for each hardware
    queue type (enum hctx_type) that the driver wishes to support. There
    are no restrictions on maps being of the same size, and it's perfectly
    legal to share maps between types.

``nr_maps``
    Number of elements in the **map** array. A number in the range
    [1, HCTX_MAX_TYPES].

``nr_hw_queues``
    Number of hardware queues supported by the block driver that owns this
    data structure.

``queue_depth``
    Number of tags per hardware queue, reserved tags included.

``reserved_tags``
    Number of tags to set aside for BLK_MQ_REQ_RESERVED tag allocations.

``cmd_size``
    Number of additional bytes to allocate per request. The block driver
    owns these additional bytes.

``numa_node``
    NUMA node the storage adapter has been connected to.

``timeout``
    Request processing timeout in jiffies.

``flags``
    Zero or more BLK_MQ_F_* flags.

``driver_data``
    Pointer to data owned by the block driver that created this tag set.

``tags``
    Tag sets. One tag set per hardware queue. Has **nr_hw_queues** elements.

``shared_tags``
    Shared set of tags. Has **nr_hw_queues** elements. If set, shared by
    all **tags**.

``tag_list_lock``
    Serializes tag_list accesses.

``tag_list``
    List of the request queues that use this tag set. See also
    request_queue.tag_set_list.

``srcu``
    Use as lock when type of the request queue is blocking
    (BLK_MQ_F_BLOCKING).

struct blk_mq_queue_data

   Data about a request inserted in a queue

**Definition**::

    struct blk_mq_queue_data {
        struct request *rq;
        bool last;
    };

**Members**

``rq``
    Request pointer.

``last``
    If it is the last request in the queue.
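To make the role of these fields concrete, here is a minimal, illustrative
sketch of how a driver might fill in a tag set once at probe time and hand it
to blk_mq_alloc_tag_set(); ``mydrv_mq_ops`` and ``struct mydrv_cmd`` are
hypothetical driver symbols and the numeric values are arbitrary, not taken
from this document::

    /* Hypothetical driver tag set; one hardware queue, default map only. */
    static struct blk_mq_tag_set mydrv_tag_set = {
        .ops          = &mydrv_mq_ops,            /* struct blk_mq_ops callbacks */
        .nr_hw_queues = 1,                        /* one hardware dispatch queue */
        .nr_maps      = 1,                        /* only HCTX_TYPE_DEFAULT is mapped */
        .queue_depth  = 64,                       /* tags per hardware queue */
        .cmd_size     = sizeof(struct mydrv_cmd), /* per-request PDU owned by the driver */
        .numa_node    = NUMA_NO_NODE,
        .timeout      = 30 * HZ,                  /* request timeout in jiffies */
    };

    static int mydrv_init_tag_set(void)
    {
        /* Validates the fields above and allocates the per-queue tag maps. */
        return blk_mq_alloc_tag_set(&mydrv_tag_set);
    }

A driver that wants dedicated read or poll queues would instead set
``nr_maps`` above 1 and describe each set in ``map[HCTX_TYPE_*]``.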
struct blk_mq_ops

   Callback functions that implement block driver behaviour.

**Definition**::

    struct blk_mq_ops {
        blk_status_t (*queue_rq)(struct blk_mq_hw_ctx *, const struct blk_mq_queue_data *);
        void (*commit_rqs)(struct blk_mq_hw_ctx *);
        void (*queue_rqs)(struct rq_list *rqlist);
        int (*get_budget)(struct request_queue *);
        void (*put_budget)(struct request_queue *, int);
        void (*set_rq_budget_token)(struct request *, int);
        int (*get_rq_budget_token)(struct request *);
        enum blk_eh_timer_return (*timeout)(struct request *);
        int (*poll)(struct blk_mq_hw_ctx *, struct io_comp_batch *);
        void (*complete)(struct request *);
        int (*init_hctx)(struct blk_mq_hw_ctx *, void *, unsigned int);
        void (*exit_hctx)(struct blk_mq_hw_ctx *, unsigned int);
        int (*init_request)(struct blk_mq_tag_set *set, struct request *, unsigned int, unsigned int);
        void (*exit_request)(struct blk_mq_tag_set *set, struct request *, unsigned int);
        void (*cleanup_rq)(struct request *);
        bool (*busy)(struct request_queue *);
        void (*map_queues)(struct blk_mq_tag_set *set);
    #ifdef CONFIG_BLK_DEBUG_FS
        void (*show_rq)(struct seq_file *m, struct request *rq);
    #endif
    };

**Members**

``queue_rq``
    Queue a new request from block IO.

``commit_rqs``
    If a driver uses bd->last to judge when to submit requests to hardware,
    it must define this function. In case of errors that make us stop issuing
    further requests, this hook serves the purpose of kicking the hardware
    (which the last request otherwise would have done).

``queue_rqs``
    Queue a list of new requests. The driver is guaranteed that each request
    belongs to the same queue. If the driver doesn't empty the **rqlist**
    completely, then the rest will be queued individually by the block layer
    upon return.

``get_budget``
    Reserve budget before queueing a request. Once .queue_rq is run, it is
    the driver's responsibility to release the reserved budget. The failure
    case of .get_budget also has to be handled to avoid I/O deadlock.

``put_budget``
    Release the reserved budget.

``set_rq_budget_token``
    Store rq's budget token.

``get_rq_budget_token``
    Retrieve rq's budget token.

``timeout``
    Called on request timeout.

``poll``
    Called to poll for completion of a specific tag.

``complete``
    Mark the request as complete.

``init_hctx``
    Called when the block layer side of a hardware queue has been set up,
    allowing the driver to allocate/init matching structures.

``exit_hctx``
    Ditto for exit/teardown.

``init_request``
    Called for every command allocated by the block layer to allow the driver
    to set up driver specific data. A tag greater than or equal to queue_depth
    is for setting up the flush request.

``exit_request``
    Ditto for exit/teardown.

``cleanup_rq``
    Called before freeing one request which isn't completed yet, and usually
    for freeing the driver private data.

``busy``
    If set, returns whether or not this queue currently is busy.

``map_queues``
    This allows drivers to specify their own queue mapping by overriding the
    setup-time function that builds the mq_map.

``show_rq``
    Used by the debugfs implementation to show driver-specific information
    about a request.
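To show how these callbacks fit together, the following is a hedged sketch of
a ->queue_rq() implementation for the hypothetical driver used in the earlier
tag-set example; ``mydrv_issue()`` and ``struct mydrv_cmd`` are assumed driver
symbols, and error handling is reduced to the minimum::

    static blk_status_t mydrv_queue_rq(struct blk_mq_hw_ctx *hctx,
                                       const struct blk_mq_queue_data *bd)
    {
        struct request *rq = bd->rq;
        struct mydrv_cmd *cmd = blk_mq_rq_to_pdu(rq); /* the cmd_size bytes */

        blk_mq_start_request(rq);      /* arm the timeout timer */

        /* Hypothetical helper that posts the command to the hardware. */
        if (mydrv_issue(hctx->driver_data, cmd))
            return BLK_STS_IOERR;

        /*
         * Completion happens later, typically from the interrupt handler,
         * via blk_mq_complete_request() or blk_mq_end_request().
         */
        return BLK_STS_OK;
    }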
blk_mq_add_to_batch

**Parameters**

``struct request *req``
    The request to add to batch

``struct io_comp_batch *iob``
    The batch to add the request

``bool is_error``
    Specify true if the request failed with an error

``void (*complete)(struct io_comp_batch *)``
    The completion handler for the request

**Description**

Batched completions only work when there is no I/O error and no special
->end_io handler.

**Return**

true when the request was added to the batch, otherwise false

struct request * blk_mq_rq_from_pdu (void *pdu)

   cast a PDU to a request

**Parameters**

``void *pdu``
    the PDU (Protocol Data Unit) to be cast

**Return**

request

**Description**

Driver command data is immediately after the request, so subtract the request
size to get back to the original request.

void * blk_mq_rq_to_pdu (struct request *rq)

   cast a request to a PDU

**Parameters**

``struct request *rq``
    the request to be cast

**Return**

pointer to the PDU

**Description**

Driver command data is immediately after the request, so add the request size
to get the PDU.
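For example, a driver whose tag set declared ``cmd_size =
sizeof(struct mydrv_cmd)`` (a hypothetical layout) obtains the PDU with
blk_mq_rq_to_pdu() on the submission side, and can walk back from the PDU to
the owning request on completion, as sketched below::

    /* Illustrative per-request driver data, sized via tag_set.cmd_size. */
    struct mydrv_cmd {
        u32        opcode;
        dma_addr_t data_dma;
    };

    /* Completion path sketch: the hardware hands back only the PDU pointer. */
    static void mydrv_complete_cmd(struct mydrv_cmd *cmd, blk_status_t status)
    {
        struct request *rq = blk_mq_rq_from_pdu(cmd);

        blk_mq_end_request(rq, status);   /* finish the whole request */
    }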
void blk_mq_wait_quiesce_done (struct blk_mq_tag_set *set)

   wait until in-progress quiesce is done

**Parameters**

``struct blk_mq_tag_set *set``
    tag_set to wait on

**Note**

It is the driver's responsibility to make sure that quiesce has been started
on one or more of the request_queues of the tag_set. This function only waits
for the quiesce on those request_queues that had the quiesce flag set using
blk_mq_quiesce_queue_nowait.

void blk_mq_quiesce_queue (struct request_queue *q)

   wait until all ongoing dispatches have finished

**Parameters**

``struct request_queue *q``
    request queue.

**Note**

This function does not prevent the struct request end_io() callback function
from being invoked. Once this function returns, we make sure no dispatch can
happen until the queue is unquiesced via blk_mq_unquiesce_queue().
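A common pattern, sketched here under the assumption of a hypothetical driver
that owns the request queue ``q`` and a helper ``mydrv_apply_settings()``, is
to quiesce around a reconfiguration step so that ->queue_rq() never sees
half-updated state::

    static void mydrv_reconfigure(struct request_queue *q)
    {
        /* Block new ->queue_rq() calls and drain ongoing dispatches. */
        blk_mq_quiesce_queue(q);

        /* State consulted by ->queue_rq() can now be updated safely. */
        mydrv_apply_settings(q->queuedata);   /* hypothetical helper */

        /* Let dispatching resume. */
        blk_mq_unquiesce_queue(q);
    }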
]h"]h$]h&]noemphhhuh1jhjP'ubeh}(h]h ]h"]h$]h&]hhuh1jhj'hhhj)'hMubeh}(h]h ]h"]h$]h&]hhjuh1jjjhj'hhhj)'hMubah}(h]j'ah ](jjeh"]h$]h&]jj)jhuh1jhj)'hMhj'hhubj)}(hhh]h)}(h6Complete multiple bytes without completing the requesth]h6Complete multiple bytes without completing the request}(hjt(hhhNhNubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chMhjq(hhubah}(h]h ]h"]h$]h&]uh1jhj'hhhj)'hMubeh}(h]h ](jfunctioneh"]h$]h&]j!jj"j(j#j(j$j%j&uh1jhhhjwhNhNubj()}(hX**Parameters** ``struct request *req`` the request being processed ``blk_status_t error`` block status code ``unsigned int nr_bytes`` number of bytes to complete for **req** **Description** Ends I/O on a number of bytes attached to **req**, but doesn't complete the request structure even if **req** doesn't have leftover. If **req** has leftover, sets it up for the next range of segments. Passing the result of blk_rq_bytes() as **nr_bytes** guarantees ``false`` return from this function. **Note** The RQF_SPECIAL_PAYLOAD flag is ignored on purpose in this function except in the consistency check at the end of this function. **Return** ``false`` - this request doesn't have any more data ``true`` - this request has more datah](h)}(h**Parameters**h]j2)}(hj(h]h Parameters}(hj(hhhNhNubah}(h]h ]h"]h$]h&]uh1j1hj(ubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chMhj(ubjH)}(hhh](jM)}(h4``struct request *req`` the request being processed h](jS)}(h``struct request *req``h]j9)}(hj(h]hstruct request *req}(hj(hhhNhNubah}(h]h ]h"]h$]h&]uh1j8hj(ubah}(h]h ]h"]h$]h&]uh1jRhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chMhj(ubjm)}(hhh]h)}(hthe request being processedh]hthe request being processed}(hj(hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj(hMhj(ubah}(h]h ]h"]h$]h&]uh1jlhj(ubeh}(h]h ]h"]h$]h&]uh1jLhj(hMhj(ubjM)}(h)``blk_status_t error`` block status code h](jS)}(h``blk_status_t error``h]j9)}(hj(h]hblk_status_t error}(hj(hhhNhNubah}(h]h ]h"]h$]h&]uh1j8hj(ubah}(h]h ]h"]h$]h&]uh1jRhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chMhj(ubjm)}(hhh]h)}(hblock status codeh]hblock status code}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj)hMhj)ubah}(h]h ]h"]h$]h&]uh1jlhj(ubeh}(h]h ]h"]h$]h&]uh1jLhj)hMhj(ubjM)}(hB``unsigned int nr_bytes`` number of bytes to complete for **req** h](jS)}(h``unsigned int nr_bytes``h]j9)}(hj')h]hunsigned int nr_bytes}(hj))hhhNhNubah}(h]h ]h"]h$]h&]uh1j8hj%)ubah}(h]h ]h"]h$]h&]uh1jRhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chMhj!)ubjm)}(hhh]h)}(h'number of bytes to complete for **req**h](h number of bytes to complete for }(hj@)hhhNhNubj2)}(h**req**h]hreq}(hjH)hhhNhNubah}(h]h ]h"]h$]h&]uh1j1hj@)ubeh}(h]h ]h"]h$]h&]uh1hhj<)hMhj=)ubah}(h]h ]h"]h$]h&]uh1jlhj!)ubeh}(h]h ]h"]h$]h&]uh1jLhj<)hMhj(ubeh}(h]h ]h"]h$]h&]uh1jGhj(ubh)}(h**Description**h]j2)}(hjp)h]h Description}(hjr)hhhNhNubah}(h]h ]h"]h$]h&]uh1j1hjn)ubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chMhj(ubh block_quote)}(hX/Ends I/O on a number of bytes attached to **req**, but doesn't complete the request structure even if **req** doesn't have leftover. If **req** has leftover, sets it up for the next range of segments. Passing the result of blk_rq_bytes() as **nr_bytes** guarantees ``false`` return from this function. h](h)}(hEnds I/O on a number of bytes attached to **req**, but doesn't complete the request structure even if **req** doesn't have leftover. 
If **req** has leftover, sets it up for the next range of segments.h](h*Ends I/O on a number of bytes attached to }(hj)hhhNhNubj2)}(h**req**h]hreq}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1j1hj)ubh7, but doesn’t complete the request structure even if }(hj)hhhNhNubj2)}(h**req**h]hreq}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1j1hj)ubh doesn’t have leftover. If }(hj)hhhNhNubj2)}(h**req**h]hreq}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1j1hj)ubh9 has leftover, sets it up for the next range of segments.}(hj)hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chMhj)ubh)}(hdPassing the result of blk_rq_bytes() as **nr_bytes** guarantees ``false`` return from this function.h](h(Passing the result of blk_rq_bytes() as }(hj)hhhNhNubj2)}(h **nr_bytes**h]hnr_bytes}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1j1hj)ubh guarantees }(hj)hhhNhNubj9)}(h ``false``h]hfalse}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1j8hj)ubh return from this function.}(hj)hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chMhj)ubeh}(h]h ]h"]h$]h&]uh1j)hj)hMhj(ubh)}(h**Note**h]j2)}(hj *h]hNote}(hj*hhhNhNubah}(h]h ]h"]h$]h&]uh1j1hj *ubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chMhj(ubj))}(hThe RQF_SPECIAL_PAYLOAD flag is ignored on purpose in this function except in the consistency check at the end of this function. h]h)}(hThe RQF_SPECIAL_PAYLOAD flag is ignored on purpose in this function except in the consistency check at the end of this function.h]hThe RQF_SPECIAL_PAYLOAD flag is ignored on purpose in this function except in the consistency check at the end of this function.}(hj&*hhhNhNubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chMhj"*ubah}(h]h ]h"]h$]h&]uh1j)hj4*hMhj(ubh)}(h **Return**h]j2)}(hj=*h]hReturn}(hj?*hhhNhNubah}(h]h ]h"]h$]h&]uh1j1hj;*ubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chMhj(ubj))}(hZ``false`` - this request doesn't have any more data ``true`` - this request has more datah]h)}(hZ``false`` - this request doesn't have any more data ``true`` - this request has more datah](j9)}(h ``false``h]hfalse}(hj[*hhhNhNubah}(h]h ]h"]h$]h&]uh1j8hjW*ubh- - this request doesn’t have any more data }(hjW*hhhNhNubj9)}(h``true``h]htrue}(hjm*hhhNhNubah}(h]h ]h"]h$]h&]uh1j8hjW*ubh - this request has more data}(hjW*hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chMhjS*ubah}(h]h ]h"]h$]h&]uh1j)hj*hMhj(ubeh}(h]h ] kernelindentah"]h$]h&]uh1j'hjwhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j$blk_mq_complete_request (C function)c.blk_mq_complete_requesthNtauh1jhjwhhhNhNubj)}(hhh](j)}(h1void blk_mq_complete_request (struct request *rq)h]j)}(h0void blk_mq_complete_request(struct request *rq)h](j)}(hvoidh]hvoid}(hj*hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj*hhhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM(ubj)}(h h]h }(hj*hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj*hhhj*hM(ubj)}(hblk_mq_complete_requesth]j)}(hblk_mq_complete_requesth]hblk_mq_complete_request}(hj*hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj*ubah}(h]h ](jjeh"]h$]h&]hhuh1jhj*hhhj*hM(ubj)}(h(struct request *rq)h]j)}(hstruct request *rqh](j)}(hjh]hstruct}(hj*hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj*ubj)}(h h]h }(hj*hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj*ubh)}(hhh]j)}(hrequesth]hrequest}(hj+hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj+ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj +modnameN 
classnameNjj)}j]j)}jj*sbc.blk_mq_complete_requestasbuh1hhj*ubj)}(h h]h }(hj'+hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj*ubj )}(hj#h]h*}(hj5+hhhNhNubah}(h]h ]j,ah"]h$]h&]uh1jhj*ubj)}(hrqh]hrq}(hjB+hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj*ubeh}(h]h ]h"]h$]h&]noemphhhuh1jhj*ubah}(h]h ]h"]h$]h&]hhuh1jhj*hhhj*hM(ubeh}(h]h ]h"]h$]h&]hhjuh1jjjhj*hhhj*hM(ubah}(h]j*ah ](jjeh"]h$]h&]jj)jhuh1jhj*hM(hj*hhubj)}(hhh]h)}(hend I/O on a requesth]hend I/O on a request}(hjl+hhhNhNubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM"hji+hhubah}(h]h ]h"]h$]h&]uh1jhj*hhhj*hM(ubeh}(h]h ](jfunctioneh"]h$]h&]j!jj"j+j#j+j$j%j&uh1jhhhjwhNhNubj()}(h**Parameters** ``struct request *rq`` the request being processed **Description** Complete a request by scheduling the ->complete_rq operation.h](h)}(h**Parameters**h]j2)}(hj+h]h Parameters}(hj+hhhNhNubah}(h]h ]h"]h$]h&]uh1j1hj+ubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM&hj+ubjH)}(hhh]jM)}(h3``struct request *rq`` the request being processed h](jS)}(h``struct request *rq``h]j9)}(hj+h]hstruct request *rq}(hj+hhhNhNubah}(h]h ]h"]h$]h&]uh1j8hj+ubah}(h]h ]h"]h$]h&]uh1jRhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM#hj+ubjm)}(hhh]h)}(hthe request being processedh]hthe request being processed}(hj+hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj+hM#hj+ubah}(h]h ]h"]h$]h&]uh1jlhj+ubeh}(h]h ]h"]h$]h&]uh1jLhj+hM#hj+ubah}(h]h ]h"]h$]h&]uh1jGhj+ubh)}(h**Description**h]j2)}(hj+h]h Description}(hj+hhhNhNubah}(h]h ]h"]h$]h&]uh1j1hj+ubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM%hj+ubj))}(h=Complete a request by scheduling the ->complete_rq operation.h]h)}(hj,h]h=Complete a request by scheduling the ->complete_rq operation.}(hj,hhhNhNubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM%hj+ubah}(h]h ]h"]h$]h&]uh1j)hj,hM%hj+ubeh}(h]h ] kernelindentah"]h$]h&]uh1j'hjwhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j!blk_mq_start_request (C function)c.blk_mq_start_requesthNtauh1jhjwhhhNhNubj)}(hhh](j)}(h.void blk_mq_start_request (struct request *rq)h]j)}(h-void blk_mq_start_request(struct request *rq)h](j)}(hvoidh]hvoid}(hj6,hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj2,hhhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM7ubj)}(h h]h }(hjE,hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj2,hhhjD,hM7ubj)}(hblk_mq_start_requesth]j)}(hblk_mq_start_requesth]hblk_mq_start_request}(hjW,hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjS,ubah}(h]h ](jjeh"]h$]h&]hhuh1jhj2,hhhjD,hM7ubj)}(h(struct request *rq)h]j)}(hstruct request *rqh](j)}(hjh]hstruct}(hjs,hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjo,ubj)}(h h]h }(hj,hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjo,ubh)}(hhh]j)}(hrequesth]hrequest}(hj,hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj,ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj,modnameN classnameNjj)}j]j)}jjY,sbc.blk_mq_start_requestasbuh1hhjo,ubj)}(h h]h }(hj,hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjo,ubj )}(hj#h]h*}(hj,hhhNhNubah}(h]h ]j,ah"]h$]h&]uh1jhjo,ubj)}(hrqh]hrq}(hj,hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhjo,ubeh}(h]h ]h"]h$]h&]noemphhhuh1jhjk,ubah}(h]h ]h"]h$]h&]hhuh1jhj2,hhhjD,hM7ubeh}(h]h ]h"]h$]h&]hhjuh1jjjhj.,hhhjD,hM7ubah}(h]j),ah ](jjeh"]h$]h&]jj)jhuh1jhjD,hM7hj+,hhubj)}(hhh]h)}(hStart processing a requesth]hStart processing a request}(hj,hhhNhNubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM0hj,hhubah}(h]h ]h"]h$]h&]uh1jhj+,hhhjD,hM7ubeh}(h]h 
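Taken together, blk_mq_start_request(), blk_update_request() and the end-request helpers form the usual driver-side completion path: the request is marked as started in the driver's ``->queue_rq`` handler, and on hardware completion it is either finished in one step or completed in parts. The following is a minimal sketch of that flow under stated assumptions; the ``my_*`` hooks are hypothetical placeholders, not kernel APIs, and error handling is reduced to the essentials.

.. code-block:: c

   #include <linux/blk-mq.h>

   /* Hypothetical hardware hooks -- placeholders for a real driver. */
   int my_hw_submit(struct request *rq);
   void my_hw_resubmit(struct request *rq);

   static blk_status_t my_queue_rq(struct blk_mq_hw_ctx *hctx,
                                   const struct blk_mq_queue_data *bd)
   {
           struct request *rq = bd->rq;

           blk_mq_start_request(rq);        /* arm the timeout timer */

           if (my_hw_submit(rq))
                   return BLK_STS_RESOURCE; /* block layer will retry later */
           return BLK_STS_OK;
   }

   /* Called from the driver's completion handler once the hardware has
    * finished done_bytes of the request. */
   static void my_complete(struct request *rq, blk_status_t status,
                           unsigned int done_bytes)
   {
           /*
            * Canonical partial-completion pattern: account done_bytes and,
            * if the request still has leftover, reissue the remainder,
            * otherwise finish the request. When the whole request is done
            * in one shot, blk_mq_end_request(rq, status) is the shorthand;
            * drivers that must complete on the submitting CPU instead call
            * blk_mq_complete_request(rq) and finish in ->complete_rq.
            */
           if (blk_update_request(rq, status, done_bytes))
                   my_hw_resubmit(rq);
           else
                   __blk_mq_end_request(rq, status);
   }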
](jfunctioneh"]h$]h&]j!jj"j-j#j-j$j%j&uh1jhhhjwhNhNubj()}(hX**Parameters** ``struct request *rq`` Pointer to request to be started **Description** Function used by device drivers to notify the block layer that a request is going to be processed now, so blk layer can do proper initializations such as starting the timeout timer.h](h)}(h**Parameters**h]j2)}(hj-h]h Parameters}(hj-hhhNhNubah}(h]h ]h"]h$]h&]uh1j1hj-ubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM4hj-ubjH)}(hhh]jM)}(h8``struct request *rq`` Pointer to request to be started h](jS)}(h``struct request *rq``h]j9)}(hj7-h]hstruct request *rq}(hj9-hhhNhNubah}(h]h ]h"]h$]h&]uh1j8hj5-ubah}(h]h ]h"]h$]h&]uh1jRhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM1hj1-ubjm)}(hhh]h)}(h Pointer to request to be startedh]h Pointer to request to be started}(hjP-hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjL-hM1hjM-ubah}(h]h ]h"]h$]h&]uh1jlhj1-ubeh}(h]h ]h"]h$]h&]uh1jLhjL-hM1hj.-ubah}(h]h ]h"]h$]h&]uh1jGhj-ubh)}(h**Description**h]j2)}(hjr-h]h Description}(hjt-hhhNhNubah}(h]h ]h"]h$]h&]uh1j1hjp-ubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM3hj-ubh)}(hFunction used by device drivers to notify the block layer that a request is going to be processed now, so blk layer can do proper initializations such as starting the timeout timer.h]hFunction used by device drivers to notify the block layer that a request is going to be processed now, so blk layer can do proper initializations such as starting the timeout timer.}(hj-hhhNhNubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM3hj-ubeh}(h]h ] kernelindentah"]h$]h&]uh1j'hjwhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j"blk_execute_rq_nowait (C function)c.blk_execute_rq_nowaithNtauh1jhjwhhhNhNubj)}(hhh](j)}(h=void blk_execute_rq_nowait (struct request *rq, bool at_head)h]j)}(h/hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj:/hM|hj;/ubah}(h]h ]h"]h$]h&]uh1jlhj/ubeh}(h]h ]h"]h$]h&]uh1jLhj:/hM|hj.ubeh}(h]h ]h"]h$]h&]uh1jGhj.ubh)}(h**Description**h]j2)}(hj`/h]h Description}(hjb/hhhNhNubah}(h]h ]h"]h$]h&]uh1j1hj^/ubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM~hj.ubj))}(hrInsert a fully prepared request at the back of the I/O scheduler queue for execution. Don't wait for completion. h]h)}(hqInsert a fully prepared request at the back of the I/O scheduler queue for execution. Don't wait for completion.h]hsInsert a fully prepared request at the back of the I/O scheduler queue for execution. 
Don’t wait for completion.}(hjz/hhhNhNubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM~hjv/ubah}(h]h ]h"]h$]h&]uh1j)hj/hM~hj.ubh)}(h**Note**h]j2)}(hj/h]hNote}(hj/hhhNhNubah}(h]h ]h"]h$]h&]uh1j1hj/ubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chMhj.ubj))}(hAThis function will invoke **done** directly if the queue is dead.h]h)}(hj/h](hThis function will invoke }(hj/hhhNhNubj2)}(h**done**h]hdone}(hj/hhhNhNubah}(h]h ]h"]h$]h&]uh1j1hj/ubh directly if the queue is dead.}(hj/hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chMhj/ubah}(h]h ]h"]h$]h&]uh1j)hj/hMhj.ubeh}(h]h ] kernelindentah"]h$]h&]uh1j'hjwhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jblk_execute_rq (C function)c.blk_execute_rqhNtauh1jhjwhhhNhNubj)}(hhh](j)}(h>blk_status_t blk_execute_rq (struct request *rq, bool at_head)h]j)}(h=blk_status_t blk_execute_rq(struct request *rq, bool at_head)h](h)}(hhh]j)}(h blk_status_th]h blk_status_t}(hj/hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj/ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj/modnameN classnameNjj)}j]j)}jblk_execute_rqsbc.blk_execute_rqasbuh1hhj/hhhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chMubj)}(h h]h }(hj0hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj/hhhj0hMubj)}(hblk_execute_rqh]j)}(hj0h]hblk_execute_rq}(hj(0hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj$0ubah}(h]h ](jjeh"]h$]h&]hhuh1jhj/hhhj0hMubj)}(h"(struct request *rq, bool at_head)h](j)}(hstruct request *rqh](j)}(hjh]hstruct}(hjC0hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj?0ubj)}(h h]h }(hjP0hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj?0ubh)}(hhh]j)}(hrequesth]hrequest}(hja0hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj^0ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjc0modnameN classnameNjj)}j]j0c.blk_execute_rqasbuh1hhj?0ubj)}(h h]h }(hj0hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj?0ubj )}(hj#h]h*}(hj0hhhNhNubah}(h]h ]j,ah"]h$]h&]uh1jhj?0ubj)}(hrqh]hrq}(hj0hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj?0ubeh}(h]h ]h"]h$]h&]noemphhhuh1jhj;0ubj)}(h bool at_headh](j)}(hjh]hbool}(hj0hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj0ubj)}(h h]h }(hj0hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj0ubj)}(hat_headh]hat_head}(hj0hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj0ubeh}(h]h ]h"]h$]h&]noemphhhuh1jhj;0ubeh}(h]h ]h"]h$]h&]hhuh1jhj/hhhj0hMubeh}(h]h ]h"]h$]h&]hhjuh1jjjhj/hhhj0hMubah}(h]j/ah ](jjeh"]h$]h&]jj)jhuh1jhj0hMhj/hhubj)}(hhh]h)}(h)insert a request into queue for executionh]h)insert a request into queue for execution}(hj0hhhNhNubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chMhj0hhubah}(h]h ]h"]h$]h&]uh1jhj/hhhj0hMubeh}(h]h ](jfunctioneh"]h$]h&]j!jj"j1j#j1j$j%j&uh1jhhhjwhNhNubj()}(hXC**Parameters** ``struct request *rq`` request to insert ``bool at_head`` insert request at head or tail of queue **Description** Insert a fully prepared request at the back of the I/O scheduler queue for execution and wait for completion. 
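These two helpers are what a driver uses to push an internally generated (for example passthrough) request through the scheduler rather than receiving it from upper layers. Below is a hedged sketch of the synchronous variant; it assumes a live request queue ``q`` owned by the caller and the recent-kernel blk_mq_alloc_request()/blk_execute_rq() signatures, and ``my_issue_internal_cmd`` is an illustrative name.

.. code-block:: c

   #include <linux/blk-mq.h>
   #include <linux/err.h>

   /* Sketch: issue one driver-internal request and wait for its result. */
   static blk_status_t my_issue_internal_cmd(struct request_queue *q)
   {
           struct request *rq;
           blk_status_t status;

           /* Allocate a non-fs request on this queue. */
           rq = blk_mq_alloc_request(q, REQ_OP_DRV_OUT, 0);
           if (IS_ERR(rq))
                   return BLK_STS_RESOURCE;

           /* ... fill in the driver-specific payload for rq here ... */

           /* Queue at the tail and sleep until completion. */
           status = blk_execute_rq(rq, false);

           blk_mq_free_request(rq);
           return status;
   }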
``void blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, unsigned long msecs)``
   Run a hardware queue asynchronously.

   **Parameters**

   ``struct blk_mq_hw_ctx *hctx``
      Pointer to the hardware queue to run.

   ``unsigned long msecs``
      Milliseconds of delay to wait before running the queue.

   **Description**

   Run a hardware queue asynchronously with a delay of **msecs**.

``void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)``
   Start to run a hardware queue.

   **Parameters**

   ``struct blk_mq_hw_ctx *hctx``
      Pointer to the hardware queue to run.

   ``bool async``
      If we want to run the queue asynchronously.

   **Description**

   Check if the request queue is not in a quiesced state and if there are
   pending requests to be sent. If this is true, run the queue to send
   requests to hardware.

``void blk_mq_run_hw_queues(struct request_queue *q, bool async)``
   Run all hardware queues in a request queue.

   **Parameters**

   ``struct request_queue *q``
      Pointer to the request queue to run.

   ``bool async``
      If we want to run the queue asynchronously.

``void blk_mq_delay_run_hw_queues(struct request_queue *q, unsigned long msecs)``
   Run all hardware queues asynchronously.

   **Parameters**

   ``struct request_queue *q``
      Pointer to the request queue to run.

   ``unsigned long msecs``
      Milliseconds of delay to wait before running the queues.
]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM hj:hhubah}(h]h ]h"]h$]h&]uh1jhj9hhhj9hM ubeh}(h]h ](jfunctioneh"]h$]h&]j!jj"j:j#j:j$j%j&uh1jhhhjwhNhNubj()}(h**Parameters** ``struct request *rq`` Pointer to request to be inserted. ``blk_insert_t flags`` BLK_MQ_INSERT_* **Description** Should only be used carefully, when the caller knows we want to bypass a potential IO scheduler on the target device.h](h)}(h**Parameters**h]j2)}(hj;h]h Parameters}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1j1hj;ubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM hj:ubjH)}(hhh](jM)}(h:``struct request *rq`` Pointer to request to be inserted. h](jS)}(h``struct request *rq``h]j9)}(hj";h]hstruct request *rq}(hj$;hhhNhNubah}(h]h ]h"]h$]h&]uh1j8hj ;ubah}(h]h ]h"]h$]h&]uh1jRhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM hj;ubjm)}(hhh]h)}(h"Pointer to request to be inserted.h]h"Pointer to request to be inserted.}(hj;;hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj7;hM hj8;ubah}(h]h ]h"]h$]h&]uh1jlhj;ubeh}(h]h ]h"]h$]h&]uh1jLhj7;hM hj;ubjM)}(h'``blk_insert_t flags`` BLK_MQ_INSERT_* h](jS)}(h``blk_insert_t flags``h]j9)}(hj[;h]hblk_insert_t flags}(hj];hhhNhNubah}(h]h ]h"]h$]h&]uh1j8hjY;ubah}(h]h ]h"]h$]h&]uh1jRhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM hjU;ubjm)}(hhh]h)}(hBLK_MQ_INSERT_*h]hBLK_MQ_INSERT_*}(hjt;hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjp;hM hjq;ubah}(h]h ]h"]h$]h&]uh1jlhjU;ubeh}(h]h ]h"]h$]h&]uh1jLhjp;hM hj;ubeh}(h]h ]h"]h$]h&]uh1jGhj:ubh)}(h**Description**h]j2)}(hj;h]h Description}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1j1hj;ubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM hj:ubh)}(huShould only be used carefully, when the caller knows we want to bypass a potential IO scheduler on the target device.h]huShould only be used carefully, when the caller knows we want to bypass a potential IO scheduler on the target device.}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM hj:ubeh}(h]h ] kernelindentah"]h$]h&]uh1j'hjwhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j&blk_mq_try_issue_directly (C function)c.blk_mq_try_issue_directlyhNtauh1jhjwhhhNhNubj)}(hhh](j)}(hOvoid blk_mq_try_issue_directly (struct blk_mq_hw_ctx *hctx, struct request *rq)h]j)}(hNvoid blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx, struct request *rq)h](j)}(hvoidh]hvoid}(hj;hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj;hhhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM ubj)}(h h]h }(hj;hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj;hhhj;hM ubj)}(hblk_mq_try_issue_directlyh]j)}(hblk_mq_try_issue_directlyh]hblk_mq_try_issue_directly}(hj;hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj;ubah}(h]h ](jjeh"]h$]h&]hhuh1jhj;hhhj;hM ubj)}(h0(struct blk_mq_hw_ctx *hctx, struct request *rq)h](j)}(hstruct blk_mq_hw_ctx *hctxh](j)}(hjh]hstruct}(hj<hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj<ubj)}(h h]h }(hj%<hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj<ubh)}(hhh]j)}(h blk_mq_hw_ctxh]h blk_mq_hw_ctx}(hj6<hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj3<ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj8<modnameN classnameNjj)}j]j)}jj;sbc.blk_mq_try_issue_directlyasbuh1hhj<ubj)}(h h]h }(hjV<hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj<ubj )}(hj#h]h*}(hjd<hhhNhNubah}(h]h ]j,ah"]h$]h&]uh1jhj<ubj)}(hhctxh]hhctx}(hjq<hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj<ubeh}(h]h ]h"]h$]h&]noemphhhuh1jhj<ubj)}(hstruct request *rqh](j)}(hjh]hstruct}(hj<hhhNhNubah}(h]h 
]jah"]h$]h&]uh1jhj<ubj)}(h h]h }(hj<hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj<ubh)}(hhh]j)}(hrequesth]hrequest}(hj<hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj<ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj<modnameN classnameNjj)}j]jR<c.blk_mq_try_issue_directlyasbuh1hhj<ubj)}(h h]h }(hj<hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj<ubj )}(hj#h]h*}(hj<hhhNhNubah}(h]h ]j,ah"]h$]h&]uh1jhj<ubj)}(hrqh]hrq}(hj<hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj<ubeh}(h]h ]h"]h$]h&]noemphhhuh1jhj<ubeh}(h]h ]h"]h$]h&]hhuh1jhj;hhhj;hM ubeh}(h]h ]h"]h$]h&]hhjuh1jjjhj;hhhj;hM ubah}(h]j;ah ](jjeh"]h$]h&]jj)jhuh1jhj;hM hj;hhubj)}(hhh]h)}(h0Try to send a request directly to device driver.h]h0Try to send a request directly to device driver.}(hj =hhhNhNubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM hj=hhubah}(h]h ]h"]h$]h&]uh1jhj;hhhj;hM ubeh}(h]h ](jfunctioneh"]h$]h&]j!jj"j#=j#j#=j$j%j&uh1jhhhjwhNhNubj()}(hX**Parameters** ``struct blk_mq_hw_ctx *hctx`` Pointer of the associated hardware queue. ``struct request *rq`` Pointer to request to be sent. **Description** If the device has enough resources to accept a new request now, send the request directly to device driver. Else, insert at hctx->dispatch queue, so we can try send it another time in the future. Requests inserted at this queue have higher priority.h](h)}(h**Parameters**h]j2)}(hj-=h]h Parameters}(hj/=hhhNhNubah}(h]h ]h"]h$]h&]uh1j1hj+=ubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM hj'=ubjH)}(hhh](jM)}(hI``struct blk_mq_hw_ctx *hctx`` Pointer of the associated hardware queue. h](jS)}(h``struct blk_mq_hw_ctx *hctx``h]j9)}(hjL=h]hstruct blk_mq_hw_ctx *hctx}(hjN=hhhNhNubah}(h]h ]h"]h$]h&]uh1j8hjJ=ubah}(h]h ]h"]h$]h&]uh1jRhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM hjF=ubjm)}(hhh]h)}(h)Pointer of the associated hardware queue.h]h)Pointer of the associated hardware queue.}(hje=hhhNhNubah}(h]h ]h"]h$]h&]uh1hhja=hM hjb=ubah}(h]h ]h"]h$]h&]uh1jlhjF=ubeh}(h]h ]h"]h$]h&]uh1jLhja=hM hjC=ubjM)}(h6``struct request *rq`` Pointer to request to be sent. h](jS)}(h``struct request *rq``h]j9)}(hj=h]hstruct request *rq}(hj=hhhNhNubah}(h]h ]h"]h$]h&]uh1j8hj=ubah}(h]h ]h"]h$]h&]uh1jRhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM hj=ubjm)}(hhh]h)}(hPointer to request to be sent.h]hPointer to request to be sent.}(hj=hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj=hM hj=ubah}(h]h ]h"]h$]h&]uh1jlhj=ubeh}(h]h ]h"]h$]h&]uh1jLhj=hM hjC=ubeh}(h]h ]h"]h$]h&]uh1jGhj'=ubh)}(h**Description**h]j2)}(hj=h]h Description}(hj=hhhNhNubah}(h]h ]h"]h$]h&]uh1j1hj=ubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM hj'=ubh)}(hIf the device has enough resources to accept a new request now, send the request directly to device driver. Else, insert at hctx->dispatch queue, so we can try send it another time in the future. Requests inserted at this queue have higher priority.h]hIf the device has enough resources to accept a new request now, send the request directly to device driver. Else, insert at hctx->dispatch queue, so we can try send it another time in the future. 
Requests inserted at this queue have higher priority.}(hj=hhhNhNubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM hj'=ubeh}(h]h ] kernelindentah"]h$]h&]uh1j'hjwhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](jblk_mq_submit_bio (C function)c.blk_mq_submit_biohNtauh1jhjwhhhNhNubj)}(hhh](j)}(h(void blk_mq_submit_bio (struct bio *bio)h]j)}(h'void blk_mq_submit_bio(struct bio *bio)h](j)}(hvoidh]hvoid}(hj>hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj>hhhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM ubj)}(h h]h }(hj>hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj>hhhj>hM ubj)}(hblk_mq_submit_bioh]j)}(hblk_mq_submit_bioh]hblk_mq_submit_bio}(hj&>hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj">ubah}(h]h ](jjeh"]h$]h&]hhuh1jhj>hhhj>hM ubj)}(h(struct bio *bio)h]j)}(hstruct bio *bioh](j)}(hjh]hstruct}(hjB>hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj>>ubj)}(h h]h }(hjO>hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj>>ubh)}(hhh]j)}(hbioh]hbio}(hj`>hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj]>ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetjb>modnameN classnameNjj)}j]j)}jj(>sbc.blk_mq_submit_bioasbuh1hhj>>ubj)}(h h]h }(hj>hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj>>ubj )}(hj#h]h*}(hj>hhhNhNubah}(h]h ]j,ah"]h$]h&]uh1jhj>>ubj)}(hbioh]hbio}(hj>hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj>>ubeh}(h]h ]h"]h$]h&]noemphhhuh1jhj:>ubah}(h]h ]h"]h$]h&]hhuh1jhj>hhhj>hM ubeh}(h]h ]h"]h$]h&]hhjuh1jjjhj=hhhj>hM ubah}(h]j=ah ](jjeh"]h$]h&]jj)jhuh1jhj>hM hj=hhubj)}(hhh]h)}(h*Create and send a request to block device.h]h*Create and send a request to block device.}(hj>hhhNhNubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM hj>hhubah}(h]h ]h"]h$]h&]uh1jhj=hhhj>hM ubeh}(h]h ](jfunctioneh"]h$]h&]j!jj"j>j#j>j$j%j&uh1jhhhjwhNhNubj()}(hX**Parameters** ``struct bio *bio`` Bio pointer. **Description** Builds up a request structure from **q** and **bio** and send to the device. The request may not be queued directly to hardware if: * This request can be merged with another one * We want to place request at plug queue for possible future merging * There is an IO scheduler active at this queue It will not queue the request if there is an error with the bio, or at the request creation.h](h)}(h**Parameters**h]j2)}(hj>h]h Parameters}(hj>hhhNhNubah}(h]h ]h"]h$]h&]uh1j1hj>ubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM hj>ubjH)}(hhh]jM)}(h!``struct bio *bio`` Bio pointer. h](jS)}(h``struct bio *bio``h]j9)}(hj?h]hstruct bio *bio}(hj?hhhNhNubah}(h]h ]h"]h$]h&]uh1j8hj?ubah}(h]h ]h"]h$]h&]uh1jRhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM hj?ubjm)}(hhh]h)}(h Bio pointer.h]h Bio pointer.}(hj?hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj?hM hj?ubah}(h]h ]h"]h$]h&]uh1jlhj?ubeh}(h]h ]h"]h$]h&]uh1jLhj?hM hj>ubah}(h]h ]h"]h$]h&]uh1jGhj>ubh)}(h**Description**h]j2)}(hjA?h]h Description}(hjC?hhhNhNubah}(h]h ]h"]h$]h&]uh1j1hj??ubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM hj>ubh)}(hX&Builds up a request structure from **q** and **bio** and send to the device. 
The request may not be queued directly to hardware if: * This request can be merged with another one * We want to place request at plug queue for possible future merging * There is an IO scheduler active at this queueh](h#Builds up a request structure from }(hjW?hhhNhNubj2)}(h**q**h]hq}(hj_?hhhNhNubah}(h]h ]h"]h$]h&]uh1j1hjW?ubh and }(hjW?hhhNhNubj2)}(h**bio**h]hbio}(hjq?hhhNhNubah}(h]h ]h"]h$]h&]uh1j1hjW?ubh and send to the device. The request may not be queued directly to hardware if: * This request can be merged with another one * We want to place request at plug queue for possible future merging * There is an IO scheduler active at this queue}(hjW?hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM hj>ubh)}(h\It will not queue the request if there is an error with the bio, or at the request creation.h]h\It will not queue the request if there is an error with the bio, or at the request creation.}(hj?hhhNhNubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chM hj>ubeh}(h]h ] kernelindentah"]h$]h&]uh1j'hjwhhhNhNubj)}(hhh]h}(h]h ]h"]h$]h&]entries](j&blk_insert_cloned_request (C function)c.blk_insert_cloned_requesthNtauh1jhjwhhhNhNubj)}(hhh](j)}(h;blk_status_t blk_insert_cloned_request (struct request *rq)h]j)}(h:blk_status_t blk_insert_cloned_request(struct request *rq)h](h)}(hhh]j)}(h blk_status_th]h blk_status_t}(hj?hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj?ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj?modnameN classnameNjj)}j]j)}jblk_insert_cloned_requestsbc.blk_insert_cloned_requestasbuh1hhj?hhhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chMp ubj)}(h h]h }(hj?hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj?hhhj?hMp ubj)}(hblk_insert_cloned_requesth]j)}(hj?h]hblk_insert_cloned_request}(hj?hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj?ubah}(h]h ](jjeh"]h$]h&]hhuh1jhj?hhhj?hMp ubj)}(h(struct request *rq)h]j)}(hstruct request *rqh](j)}(hjh]hstruct}(hj @hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj@ubj)}(h h]h }(hj@hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj@ubh)}(hhh]j)}(hrequesth]hrequest}(hj)@hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj&@ubah}(h]h ]h"]h$]h&] refdomainjreftypej reftargetj+@modnameN classnameNjj)}j]j?c.blk_insert_cloned_requestasbuh1hhj@ubj)}(h h]h }(hjG@hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj@ubj )}(hj#h]h*}(hjU@hhhNhNubah}(h]h ]j,ah"]h$]h&]uh1jhj@ubj)}(hrqh]hrq}(hjb@hhhNhNubah}(h]h ]jah"]h$]h&]uh1jhj@ubeh}(h]h ]h"]h$]h&]noemphhhuh1jhj@ubah}(h]h ]h"]h$]h&]hhuh1jhj?hhhj?hMp ubeh}(h]h ]h"]h$]h&]hhjuh1jjjhj?hhhj?hMp ubah}(h]j?ah ](jjeh"]h$]h&]jj)jhuh1jhj?hMp hj?hhubj)}(hhh]h)}(h/Helper for stacking drivers to submit a requesth]h/Helper for stacking drivers to submit a request}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chMm hj@hhubah}(h]h ]h"]h$]h&]uh1jhj?hhhj?hMp ubeh}(h]h ](jfunctioneh"]h$]h&]j!jj"j@j#j@j$j%j&uh1jhhhjwhNhNubj()}(hA**Parameters** ``struct request *rq`` the request being queuedh](h)}(h**Parameters**h]j2)}(hj@h]h Parameters}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1j1hj@ubah}(h]h ]h"]h$]h&]uh1hhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chMq hj@ubjH)}(hhh]jM)}(h/``struct request *rq`` the request being queuedh](jS)}(h``struct request *rq``h]j9)}(hj@h]hstruct request *rq}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1j8hj@ubah}(h]h ]h"]h$]h&]uh1jRhL/var/lib/git/docbuild/linux/Documentation/block/blk-mq:153: ./block/blk-mq.chMs hj@ubjm)}(hhh]h)}(hthe request being queuedh]hthe request being 
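blk_mq_submit_bio() is not called by drivers directly; it is reached through submit_bio() when upper layers send I/O to a blk-mq device. A hedged sketch of how a kernel caller would build and submit a bio that ends up here, assuming a valid ``bdev``, a page owned by the caller, and the recent-kernel bio_alloc() signature; ``my_write_page`` is an illustrative name:

.. code-block:: c

   #include <linux/bio.h>
   #include <linux/blkdev.h>

   /* Sketch: write one page to a block device. The bio travels through
    * submit_bio() and eventually reaches blk_mq_submit_bio(). */
   static void my_write_page(struct block_device *bdev, struct page *page,
                             sector_t sector, bio_end_io_t *done)
   {
           struct bio *bio;

           bio = bio_alloc(bdev, 1, REQ_OP_WRITE, GFP_NOIO);
           bio->bi_iter.bi_sector = sector;
           bio->bi_end_io = done;                  /* completion callback */
           __bio_add_page(bio, page, PAGE_SIZE, 0);

           submit_bio(bio);        /* asynchronous; "done" runs on completion */
   }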
``blk_status_t blk_insert_cloned_request(struct request *rq)``
   Helper for stacking drivers to submit a request

   **Parameters**

   ``struct request *rq``
      the request being queued

``void blk_rq_unprep_clone(struct request *rq)``
   Helper function to free all bios in a cloned request

   **Parameters**

   ``struct request *rq``
      the clone request to be cleaned up

   **Description**

   Free all bios in **rq** for a cloned request.

``int blk_rq_prep_clone(struct request *rq, struct request *rq_src, struct bio_set *bs, gfp_t gfp_mask, int (*bio_ctr)(struct bio *, struct bio *, void *), void *data)``
   Helper function to setup clone request

   **Parameters**

   ``struct request *rq``
      the request to be setup

   ``struct request *rq_src``
      original request to be cloned

   ``struct bio_set *bs``
      bio_set that bios for clone are allocated from

   ``gfp_t gfp_mask``
      memory allocation mask for bio

   ``int (*bio_ctr)(struct bio *, struct bio *, void *)``
      setup function to be called for each clone bio.
      Returns ``0`` for success, non ``0`` for failure.

   ``void *data``
      private data to be passed to **bio_ctr**

   **Description**

   Clones bios in **rq_src** to **rq**, and copies attributes of **rq_src**
   to **rq**. Also, pages which the original bios are pointing to are not
   copied and the cloned bios just point to the same pages. So cloned bios
   must be completed before original bios, which means the caller must
   complete **rq** before **rq_src**.
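These three helpers are what request-based stacking drivers (device-mapper style) use: clone an incoming request, submit the clone to the lower device, and release the clone's bios on the error path. A hedged sketch under those assumptions follows; ``my_pass_down`` is an illustrative name, the clone is assumed to have been allocated on the lower device's queue by the caller, and real drivers pass a ``bio_ctr`` callback where per-bio setup is needed.

.. code-block:: c

   #include <linux/blk-mq.h>

   /* Sketch: hand a request from a stacking driver down to a lower device.
    * "bs" is the driver's private bio_set. Remember that the clone must be
    * completed before the original request. */
   static blk_status_t my_pass_down(struct request *clone,
                                    struct request *rq_src,
                                    struct bio_set *bs)
   {
           blk_status_t status;

           /* Point the clone's bios at the same pages as the original. */
           if (blk_rq_prep_clone(clone, rq_src, bs, GFP_ATOMIC, NULL, NULL))
                   return BLK_STS_RESOURCE;

           /* Submit on the lower queue, bypassing bio-based entry points. */
           status = blk_insert_cloned_request(clone);
           if (status != BLK_STS_OK)
                   blk_rq_unprep_clone(clone);     /* free the cloned bios */

           return status;
   }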
``void blk_mq_destroy_queue(struct request_queue *q)``
   shutdown a request queue

   **Parameters**

   ``struct request_queue *q``
      request queue to shutdown

   **Description**

   This shuts down a request queue allocated by blk_mq_alloc_queue(). All
   future requests will be failed with -ENODEV. The caller is responsible
   for dropping the reference from blk_mq_alloc_queue() by calling
   blk_put_queue().

   **Context**

   can sleep
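Because blk_mq_destroy_queue() does not drop the allocation reference itself, teardown is a two-step sequence. A hedged sketch, assuming a queue that was set up with blk_mq_alloc_queue() and a tag set owned by the driver; ``my_remove_device`` is an illustrative name:

.. code-block:: c

   #include <linux/blk-mq.h>
   #include <linux/blkdev.h>

   /* Sketch: tear down a queue created with blk_mq_alloc_queue(). After
    * blk_mq_destroy_queue(), pending and future requests fail with -ENODEV. */
   static void my_remove_device(struct request_queue *q,
                                struct blk_mq_tag_set *set)
   {
           blk_mq_destroy_queue(q);        /* fail future I/O, drain the queue */
           blk_put_queue(q);               /* drop the allocation reference */
           blk_mq_free_tag_set(set);       /* release the driver's tag set */
   }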