sphinx.addnodesdocument)}( rawsourcechildren]( translations LanguagesNode)}(hhh](h pending_xref)}(hhh]docutils.nodesTextChinese (Simplified)}parenthsba attributes}(ids]classes]names]dupnames]backrefs] refdomainstdreftypedoc reftarget1/translations/zh_CN/admin-guide/device-mapper/vdomodnameN classnameN refexplicitutagnamehhh ubh)}(hhh]hChinese (Traditional)}hh2sbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget1/translations/zh_TW/admin-guide/device-mapper/vdomodnameN classnameN refexplicituh1hhh ubh)}(hhh]hItalian}hhFsbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget1/translations/it_IT/admin-guide/device-mapper/vdomodnameN classnameN refexplicituh1hhh ubh)}(hhh]hJapanese}hhZsbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget1/translations/ja_JP/admin-guide/device-mapper/vdomodnameN classnameN refexplicituh1hhh ubh)}(hhh]hKorean}hhnsbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget1/translations/ko_KR/admin-guide/device-mapper/vdomodnameN classnameN refexplicituh1hhh ubh)}(hhh]hSpanish}hhsbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget1/translations/sp_SP/admin-guide/device-mapper/vdomodnameN classnameN refexplicituh1hhh ubeh}(h]h ]h"]h$]h&]current_languageEnglishuh1h hh _documenthsourceNlineNubhcomment)}(h%SPDX-License-Identifier: GPL-2.0-onlyh]h%SPDX-License-Identifier: GPL-2.0-only}hhsbah}(h]h ]h"]h$]h&] xml:spacepreserveuh1hhhhhhK/var/lib/git/docbuild/linux/Documentation/admin-guide/device-mapper/vdo.rsthKubhsection)}(hhh](htitle)}(hdm-vdoh]hdm-vdo}(hhhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhhhhhKubh paragraph)}(hXThe dm-vdo (virtual data optimizer) device mapper target provides block-level deduplication, compression, and thin provisioning. As a device mapper target, it can add these features to the storage stack, compatible with any file system. The vdo target does not protect against data corruption, relying instead on integrity protection of the storage below it. It is strongly recommended that lvm be used to manage vdo volumes. 
See lvmvdo(7).h]hXThe dm-vdo (virtual data optimizer) device mapper target provides block-level deduplication, compression, and thin provisioning. As a device mapper target, it can add these features to the storage stack, compatible with any file system. The vdo target does not protect against data corruption, relying instead on integrity protection of the storage below it. It is strongly recommended that lvm be used to manage vdo volumes. See lvmvdo(7).}(hhhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhhhhubeh}(h]dm-vdoah ]h"]dm-vdoah$]h&]uh1hhhhhhhhKubh)}(hhh](h)}(hUserspace componenth]hUserspace component}(hhhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhhhhhKubh)}(hOFormatting a vdo volume requires the use of the 'vdoformat' tool, available at:h]hSFormatting a vdo volume requires the use of the ‘vdoformat’ tool, available at:}(hhhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhhhhubh)}(hhttps://github.com/dm-vdo/vdo/h]h reference)}(hjh]hhttps://github.com/dm-vdo/vdo/}(hjhhhNhNubah}(h]h ]h"]h$]h&]refurijuh1jhjubah}(h]h ]h"]h$]h&]uh1hhhhKhhhhubh)}(hXIn most cases, a vdo target will recover from a crash automatically the next time it is started. In cases where it encountered an unrecoverable error (either during normal operation or crash recovery) the target will enter or come up in read-only mode. Because read-only mode is indicative of data-loss, a positive action must be taken to bring vdo out of read-only mode. The 'vdoforcerebuild' tool, available from the same repo, is used to prepare a read-only vdo to exit read-only mode. After running this tool, the vdo target will rebuild its metadata the next time it is started. Although some data may be lost, the rebuilt vdo's metadata will be internally consistent and the target will be writable again.h]hXIn most cases, a vdo target will recover from a crash automatically the next time it is started. In cases where it encountered an unrecoverable error (either during normal operation or crash recovery) the target will enter or come up in read-only mode. 
Because read-only mode is indicative of data-loss, a positive action must be taken to bring vdo out of read-only mode. The ‘vdoforcerebuild’ tool, available from the same repo, is used to prepare a read-only vdo to exit read-only mode. After running this tool, the vdo target will rebuild its metadata the next time it is started. Although some data may be lost, the rebuilt vdo’s metadata will be internally consistent and the target will be writable again.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhhhhubh)}(hThe repo also contains additional userspace tools which can be used to inspect a vdo target's on-disk metadata. Fortunately, these tools are rarely needed except by dm-vdo developers.h]hThe repo also contains additional userspace tools which can be used to inspect a vdo target’s on-disk metadata. Fortunately, these tools are rarely needed except by dm-vdo developers.}(hj(hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK!hhhhubeh}(h]userspace-componentah ]h"]userspace componentah$]h&]uh1hhhhhhhhKubh)}(hhh](h)}(hMetadata requirementsh]hMetadata requirements}(hjAhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj>hhhhhK&ubh)}(hXWEach vdo volume reserves 3GB of space for metadata, or more depending on its configuration. It is helpful to check that the space saved by deduplication and compression is not cancelled out by the metadata requirements. An estimation of the space saved for a specific dataset can be computed with the vdo estimator tool, which is available at:h]hXWEach vdo volume reserves 3GB of space for metadata, or more depending on its configuration. It is helpful to check that the space saved by deduplication and compression is not cancelled out by the metadata requirements. 
An estimation of the space saved for a specific dataset can be computed with the vdo estimator tool, which is available at:}(hjOhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK(hj>hhubh)}(h'https://github.com/dm-vdo/vdoestimator/h]j)}(hj_h]h'https://github.com/dm-vdo/vdoestimator/}(hjahhhNhNubah}(h]h ]h"]h$]h&]refurij_uh1jhj]ubah}(h]h ]h"]h$]h&]uh1hhhhK.hj>hhubeh}(h]metadata-requirementsah ]h"]metadata requirementsah$]h&]uh1hhhhhhhhK&ubh)}(hhh](h)}(hTarget interfaceh]hTarget interface}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj}hhhhhK1ubh)}(hhh](h)}(h Table lineh]h Table line}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhK4ubh literal_block)}(h vdo V4 [optional arguments]h]h vdo V4 [optional arguments]}hjsbah}(h]h ]h"]h$]h&]hhuh1jhhhK8hjhhubh)}(hRequired parameters:h]hRequired parameters:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK=hjhhubh block_quote)}(hX-offset: The offset, in sectors, at which the vdo volume's logical space begins. logical device size: The size of the device which the vdo volume will service, in sectors. Must match the current logical size of the vdo volume. storage device: The device holding the vdo volume's data and metadata. storage device size: The size of the device holding the vdo volume, as a number of 4096-byte blocks. Must match the current size of the vdo volume. minimum I/O size: The minimum I/O size for this vdo volume to accept, in bytes. Valid values are 512 or 4096. The recommended value is 4096. block map cache size: The size of the block map cache, as a number of 4096-byte blocks. The minimum and recommended value is 32768 blocks. If the logical thread count is non-zero, the cache size must be at least 4096 blocks per logical thread. block map era length: The speed with which the block map cache writes out modified block map pages. A smaller era length is likely to reduce the amount of time spent rebuilding, at the cost of increased block map writes during normal operation. The maximum and recommended value is 16380; the minimum value is 1. 
h]hdefinition_list)}(hhh](hdefinition_list_item)}(hPoffset: The offset, in sectors, at which the vdo volume's logical space begins. h](hterm)}(hoffset:h]hoffset:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhhhKAhjubh definition)}(hhh]h)}(hGThe offset, in sectors, at which the vdo volume's logical space begins.h]hIThe offset, in sectors, at which the vdo volume’s logical space begins.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK@hjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhhhKAhjubj)}(hlogical device size: The size of the device which the vdo volume will service, in sectors. Must match the current logical size of the vdo volume. h](j)}(hlogical device size:h]hlogical device size:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhhhKFhjubj)}(hhh]h)}(h|The size of the device which the vdo volume will service, in sectors. Must match the current logical size of the vdo volume.h]h|The size of the device which the vdo volume will service, in sectors. Must match the current logical size of the vdo volume.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKDhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhhhKFhjubj)}(hGstorage device: The device holding the vdo volume's data and metadata. h](j)}(hstorage device:h]hstorage device:}(hj0hhhNhNubah}(h]h ]h"]h$]h&]uh1jhhhKIhj,ubj)}(hhh]h)}(h6The device holding the vdo volume's data and metadata.h]h8The device holding the vdo volume’s data and metadata.}(hjAhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKIhj>ubah}(h]h ]h"]h$]h&]uh1jhj,ubeh}(h]h ]h"]h$]h&]uh1jhhhKIhjubj)}(hstorage device size: The size of the device holding the vdo volume, as a number of 4096-byte blocks. Must match the current size of the vdo volume. h](j)}(hstorage device size:h]hstorage device size:}(hj_hhhNhNubah}(h]h ]h"]h$]h&]uh1jhhhKNhj[ubj)}(hhh]h)}(h~The size of the device holding the vdo volume, as a number of 4096-byte blocks. Must match the current size of the vdo volume.h]h~The size of the device holding the vdo volume, as a number of 4096-byte blocks. 
Must match the current size of the vdo volume.}(hjphhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKLhjmubah}(h]h ]h"]h$]h&]uh1jhj[ubeh}(h]h ]h"]h$]h&]uh1jhhhKNhjubj)}(hminimum I/O size: The minimum I/O size for this vdo volume to accept, in bytes. Valid values are 512 or 4096. The recommended value is 4096. h](j)}(hminimum I/O size:h]hminimum I/O size:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhhhKShjubj)}(hhh]h)}(hzThe minimum I/O size for this vdo volume to accept, in bytes. Valid values are 512 or 4096. The recommended value is 4096.h]hzThe minimum I/O size for this vdo volume to accept, in bytes. Valid values are 512 or 4096. The recommended value is 4096.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKQhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhhhKShjubj)}(hblock map cache size: The size of the block map cache, as a number of 4096-byte blocks. The minimum and recommended value is 32768 blocks. If the logical thread count is non-zero, the cache size must be at least 4096 blocks per logical thread. h](j)}(hblock map cache size:h]hblock map cache size:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhhhKYhjubj)}(hhh]h)}(hThe size of the block map cache, as a number of 4096-byte blocks. The minimum and recommended value is 32768 blocks. If the logical thread count is non-zero, the cache size must be at least 4096 blocks per logical thread.h]hThe size of the block map cache, as a number of 4096-byte blocks. The minimum and recommended value is 32768 blocks. If the logical thread count is non-zero, the cache size must be at least 4096 blocks per logical thread.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKVhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhhhKYhjubj)}(hX9block map era length: The speed with which the block map cache writes out modified block map pages. A smaller era length is likely to reduce the amount of time spent rebuilding, at the cost of increased block map writes during normal operation. The maximum and recommended value is 16380; the minimum value is 1. 
h](j)}(hblock map era length:h]hblock map era length:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhhhKahjubj)}(hhh]h)}(hX"The speed with which the block map cache writes out modified block map pages. A smaller era length is likely to reduce the amount of time spent rebuilding, at the cost of increased block map writes during normal operation. The maximum and recommended value is 16380; the minimum value is 1.h]hX"The speed with which the block map cache writes out modified block map pages. A smaller era length is likely to reduce the amount of time spent rebuilding, at the cost of increased block map writes during normal operation. The maximum and recommended value is 16380; the minimum value is 1.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK\hjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhhhKahjubeh}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jhhhK?hjhhubeh}(h] table-lineah ]h"] table lineah$]h&]uh1hhj}hhhhhK4ubh)}(hhh](h)}(hOptional parameters:h]hOptional parameters:}(hj.hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj+hhhhhKdubh)}(hHSome or all of these parameters may be specified as pairs.h]hHSome or all of these parameters may be specified as pairs.}(hj<hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKehj+hhubh)}(hThread related parameters:h]hThread related parameters:}(hjJhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKghj+hhubh)}(hDifferent categories of work are assigned to separate thread groups, and the number of threads in each group can be configured separately.h]hDifferent categories of work are assigned to separate thread groups, and the number of threads in each group can be configured separately.}(hjXhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKihj+hhubh)}(hIf , , and are all set to 0, the work handled by all three thread types will be handled by a single thread. If any of these values are non-zero, all of them must be non-zero.h]hIf , , and are all set to 0, the work handled by all three thread types will be handled by a single thread. 
If any of these values are non-zero, all of them must be non-zero.}(hjfhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKlhj+hhubj)}(hXack: The number of threads used to complete bios. Since completing a bio calls an arbitrary completion function outside the vdo volume, threads of this type allow the vdo volume to continue processing requests even when bio completion is slow. The default is 1. bio: The number of threads used to issue bios to the underlying storage. Threads of this type allow the vdo volume to continue processing requests even when bio submission is slow. The default is 4. bioRotationInterval: The number of bios to enqueue on each bio thread before switching to the next thread. The value must be greater than 0 and not more than 1024; the default is 64. cpu: The number of threads used to do CPU-intensive work, such as hashing and compression. The default is 1. hash: The number of threads used to manage data comparisons for deduplication based on the hash value of data blocks. The default is 0. logical: The number of threads used to manage caching and locking based on the logical address of incoming bios. The default is 0; the maximum is 60. physical: The number of threads used to manage administration of the underlying storage device. At format time, a slab size for the vdo is chosen; the vdo storage device must be large enough to have at least 1 slab per physical thread. The default is 0; the maximum is 16. h]j)}(hhh](j)}(hXack: The number of threads used to complete bios. Since completing a bio calls an arbitrary completion function outside the vdo volume, threads of this type allow the vdo volume to continue processing requests even when bio completion is slow. The default is 1. h](j)}(hack:h]hack:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhhhKuhj{ubj)}(hhh]h)}(hXThe number of threads used to complete bios. 
Since completing a bio calls an arbitrary completion function outside the vdo volume, threads of this type allow the vdo volume to continue processing requests even when bio completion is slow. The default is 1.h]hXThe number of threads used to complete bios. Since completing a bio calls an arbitrary completion function outside the vdo volume, threads of this type allow the vdo volume to continue processing requests even when bio completion is slow. The default is 1.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKqhjubah}(h]h ]h"]h$]h&]uh1jhj{ubeh}(h]h ]h"]h$]h&]uh1jhhhKuhjxubj)}(hbio: The number of threads used to issue bios to the underlying storage. Threads of this type allow the vdo volume to continue processing requests even when bio submission is slow. The default is 4. h](j)}(hbio:h]hbio:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhhhK{hjubj)}(hhh]h)}(hThe number of threads used to issue bios to the underlying storage. Threads of this type allow the vdo volume to continue processing requests even when bio submission is slow. The default is 4.h]hThe number of threads used to issue bios to the underlying storage. Threads of this type allow the vdo volume to continue processing requests even when bio submission is slow. The default is 4.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKxhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhhhK{hjxubj)}(hbioRotationInterval: The number of bios to enqueue on each bio thread before switching to the next thread. The value must be greater than 0 and not more than 1024; the default is 64. h](j)}(hbioRotationInterval:h]hbioRotationInterval:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhhhKhjubj)}(hhh]h)}(hThe number of bios to enqueue on each bio thread before switching to the next thread. The value must be greater than 0 and not more than 1024; the default is 64.h]hThe number of bios to enqueue on each bio thread before switching to the next thread. 
The value must be greater than 0 and not more than 1024; the default is 64.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK~hjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhhhKhjxubj)}(hmcpu: The number of threads used to do CPU-intensive work, such as hashing and compression. The default is 1. h](j)}(hcpu:h]hcpu:}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhhhKhjubj)}(hhh]h)}(hgThe number of threads used to do CPU-intensive work, such as hashing and compression. The default is 1.h]hgThe number of threads used to do CPU-intensive work, such as hashing and compression. The default is 1.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhhhKhjxubj)}(hhash: The number of threads used to manage data comparisons for deduplication based on the hash value of data blocks. The default is 0. h](j)}(hhash:h]hhash:}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1jhhhKhj7ubj)}(hhh]h)}(hThe number of threads used to manage data comparisons for deduplication based on the hash value of data blocks. The default is 0.h]hThe number of threads used to manage data comparisons for deduplication based on the hash value of data blocks. The default is 0.}(hjLhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjIubah}(h]h ]h"]h$]h&]uh1jhj7ubeh}(h]h ]h"]h$]h&]uh1jhhhKhjxubj)}(hlogical: The number of threads used to manage caching and locking based on the logical address of incoming bios. The default is 0; the maximum is 60. h](j)}(hlogical:h]hlogical:}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhhhKhjfubj)}(hhh]h)}(hThe number of threads used to manage caching and locking based on the logical address of incoming bios. The default is 0; the maximum is 60.h]hThe number of threads used to manage caching and locking based on the logical address of incoming bios. The default is 0; the maximum is 60.}(hj{hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjxubah}(h]h ]h"]h$]h&]uh1jhjfubeh}(h]h ]h"]h$]h&]uh1jhhhKhjxubj)}(hXphysical: The number of threads used to manage administration of the underlying storage device. 
At format time, a slab size for the vdo is chosen; the vdo storage device must be large enough to have at least 1 slab per physical thread. The default is 0; the maximum is 16. h](j)}(h physical:h]h physical:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhhhKhjubj)}(hhh]h)}(hXThe number of threads used to manage administration of the underlying storage device. At format time, a slab size for the vdo is chosen; the vdo storage device must be large enough to have at least 1 slab per physical thread. The default is 0; the maximum is 16.h]hXThe number of threads used to manage administration of the underlying storage device. At format time, a slab size for the vdo is chosen; the vdo storage device must be large enough to have at least 1 slab per physical thread. The default is 0; the maximum is 16.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhhhKhjxubeh}(h]h ]h"]h$]h&]uh1jhjtubah}(h]h ]h"]h$]h&]uh1jhhhKphj+hhubh)}(hMiscellaneous parameters:h]hMiscellaneous parameters:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj+hhubj)}(hXvmaxDiscard: The maximum size of discard bio accepted, in 4096-byte blocks. I/O requests to a vdo volume are normally split into 4096-byte blocks, and processed up to 2048 at a time. However, discard requests to a vdo volume can be automatically split to a larger size, up to 4096-byte blocks in a single bio, and are limited to 1500 at a time. Increasing this value may provide better overall performance, at the cost of increased latency for the individual discard requests. The default and minimum is 1; the maximum is UINT_MAX / 4096. deduplication: Whether deduplication is enabled. The default is 'on'; the acceptable values are 'on' and 'off'. compression: Whether compression is enabled. The default is 'off'; the acceptable values are 'on' and 'off'. h]j)}(hhh](j)}(hX'maxDiscard: The maximum size of discard bio accepted, in 4096-byte blocks. 
I/O requests to a vdo volume are normally split into 4096-byte blocks, and processed up to 2048 at a time. However, discard requests to a vdo volume can be automatically split to a larger size, up to 4096-byte blocks in a single bio, and are limited to 1500 at a time. Increasing this value may provide better overall performance, at the cost of increased latency for the individual discard requests. The default and minimum is 1; the maximum is UINT_MAX / 4096. h](j)}(h maxDiscard:h]h maxDiscard:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhhhKhjubj)}(hhh]h)}(hXThe maximum size of discard bio accepted, in 4096-byte blocks. I/O requests to a vdo volume are normally split into 4096-byte blocks, and processed up to 2048 at a time. However, discard requests to a vdo volume can be automatically split to a larger size, up to 4096-byte blocks in a single bio, and are limited to 1500 at a time. Increasing this value may provide better overall performance, at the cost of increased latency for the individual discard requests. The default and minimum is 1; the maximum is UINT_MAX / 4096.h]hXThe maximum size of discard bio accepted, in 4096-byte blocks. I/O requests to a vdo volume are normally split into 4096-byte blocks, and processed up to 2048 at a time. However, discard requests to a vdo volume can be automatically split to a larger size, up to 4096-byte blocks in a single bio, and are limited to 1500 at a time. Increasing this value may provide better overall performance, at the cost of increased latency for the individual discard requests. The default and minimum is 1; the maximum is UINT_MAX / 4096.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhhhKhjubj)}(hpdeduplication: Whether deduplication is enabled. The default is 'on'; the acceptable values are 'on' and 'off'. h](j)}(hdeduplication:h]hdeduplication:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhhhKhjubj)}(hhh]h)}(h`Whether deduplication is enabled. 
The default is 'on'; the acceptable values are 'on' and 'off'.h]hlWhether deduplication is enabled. The default is ‘on’; the acceptable values are ‘on’ and ‘off’.}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj&ubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhhhKhjubj)}(hmcompression: Whether compression is enabled. The default is 'off'; the acceptable values are 'on' and 'off'. h](j)}(h compression:h]h compression:}(hjGhhhNhNubah}(h]h ]h"]h$]h&]uh1jhhhKhjCubj)}(hhh]h)}(h_Whether compression is enabled. The default is 'off'; the acceptable values are 'on' and 'off'.h]hkWhether compression is enabled. The default is ‘off’; the acceptable values are ‘on’ and ‘off’.}(hjXhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjUubah}(h]h ]h"]h$]h&]uh1jhjCubeh}(h]h ]h"]h$]h&]uh1jhhhKhjubeh}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jhhhKhj+hhubeh}(h]optional-parametersah ]h"]optional parameters:ah$]h&]uh1hhj}hhhhhKdubh)}(hhh](h)}(hDevice modificationh]hDevice modification}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhKubh)}(hX A modified table may be loaded into a running, non-suspended vdo volume. The modifications will take effect when the device is next resumed. The modifiable parameters are , , , , and .h]hX A modified table may be loaded into a running, non-suspended vdo volume. The modifications will take effect when the device is next resumed. The modifiable parameters are , , , , and .}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjhhubh)}(hXeIf the logical device size or physical device size are changed, upon successful resume vdo will store the new values and require them on future startups. These two parameters may not be decreased. The logical device size may not exceed 4 PB. The physical device size must increase by at least 32832 4096-byte blocks if at all, and must not exceed the size of the underlying storage device. 
Additionally, when formatting the vdo device, a slab size is chosen: the physical device size may never increase above the size which provides 8192 slabs, and each increase must be large enough to add at least one new slab.h]hXeIf the logical device size or physical device size are changed, upon successful resume vdo will store the new values and require them on future startups. These two parameters may not be decreased. The logical device size may not exceed 4 PB. The physical device size must increase by at least 32832 4096-byte blocks if at all, and must not exceed the size of the underlying storage device. Additionally, when formatting the vdo device, a slab size is chosen: the physical device size may never increase above the size which provides 8192 slabs, and each increase must be large enough to add at least one new slab.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjhhubh)}(h Examples:h]h Examples:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjhhubh)}(hStart a previously-formatted vdo volume with 1 GB logical space and 1 GB physical space, storing to /dev/dm-1 which has more than 1 GB of space.h]hStart a previously-formatted vdo volume with 1 GB logical space and 1 GB physical space, storing to /dev/dm-1 which has more than 1 GB of space.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjhhubj)}(hRdmsetup create vdo0 --table \ "0 2097152 vdo V4 /dev/dm-1 262144 4096 32768 16380"h]hRdmsetup create vdo0 --table \ "0 2097152 vdo V4 /dev/dm-1 262144 4096 32768 16380"}hjsbah}(h]h ]h"]h$]h&]hhuh1jhhhKhjhhubh)}(hGrow the logical size to 4 GB.h]hGrow the logical size to 4 GB.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjhhubj)}(hfdmsetup reload vdo0 --table \ "0 8388608 vdo V4 /dev/dm-1 262144 4096 32768 16380" dmsetup resume vdo0h]hfdmsetup reload vdo0 --table \ "0 8388608 vdo V4 /dev/dm-1 262144 4096 32768 16380" dmsetup resume vdo0}hjsbah}(h]h ]h"]h$]h&]hhuh1jhhhKhjhhubh)}(hGrow the physical size to 2 GB.h]hGrow the physical size to 2 GB.}(hjhhhNhNubah}(h]h 
]h"]h$]h&]uh1hhhhKhjhhubj)}(hfdmsetup reload vdo0 --table \ "0 8388608 vdo V4 /dev/dm-1 524288 4096 32768 16380" dmsetup resume vdo0h]hfdmsetup reload vdo0 --table \ "0 8388608 vdo V4 /dev/dm-1 524288 4096 32768 16380" dmsetup resume vdo0}hjsbah}(h]h ]h"]h$]h&]hhuh1jhhhKhjhhubh)}(hEGrow the physical size by 1 GB more and increase max discard sectors.h]hEGrow the physical size by 1 GB more and increase max discard sectors.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjhhubj)}(htdmsetup reload vdo0 --table \ "0 10485760 vdo V4 /dev/dm-1 786432 4096 32768 16380 maxDiscard 8" dmsetup resume vdo0h]htdmsetup reload vdo0 --table \ "0 10485760 vdo V4 /dev/dm-1 786432 4096 32768 16380 maxDiscard 8" dmsetup resume vdo0}hj#sbah}(h]h ]h"]h$]h&]hhuh1jhhhKhjhhubh)}(hStop the vdo volume.h]hStop the vdo volume.}(hj1hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjhhubj)}(hdmsetup remove vdo0h]hdmsetup remove vdo0}hj?sbah}(h]h ]h"]h$]h&]hhuh1jhhhKhjhhubh)}(h~Start the vdo volume again. Note that the logical and physical device sizes must still match, but other parameters can change.h]h~Start the vdo volume again. 
Note that the logical and physical device sizes must still match, but other parameters can change.}(hjMhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjhhubj)}(hmdmsetup create vdo1 --table \ "0 10485760 vdo V4 /dev/dm-1 786432 512 65550 5000 hash 1 logical 3 physical 2"h]hmdmsetup create vdo1 --table \ "0 10485760 vdo V4 /dev/dm-1 786432 512 65550 5000 hash 1 logical 3 physical 2"}hj[sbah}(h]h ]h"]h$]h&]hhuh1jhhhKhjhhubeh}(h]device-modificationah ]h"]device modificationah$]h&]uh1hhj}hhhhhKubh)}(hhh](h)}(hMessagesh]hMessages}(hjthhhNhNubah}(h]h ]h"]h$]h&]uh1hhjqhhhhhKubh)}(h,All vdo devices accept messages in the form:h]h,All vdo devices accept messages in the form:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjqhhubj)}(hCdmsetup message 0 h]hCdmsetup message 0 }hjsbah}(h]h ]h"]h$]h&]hhuh1jhhhKhjqhhubh)}(hThe messages are:h]hThe messages are:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjqhhubj)}(hXstats: Outputs the current view of the vdo statistics. Mostly used by the vdostats userspace program to interpret the output buffer. config: Outputs useful vdo configuration information. Mostly used by users who want to recreate a similar VDO volume and want to know the creation configuration used. dump: Dumps many internal structures to the system log. This is not always safe to run, so it should only be used to debug a hung vdo. Optional parameters to specify structures to dump are: viopool: The pool of I/O requests incoming bios pools: A synonym of 'viopool' vdo: Most of the structures managing on-disk data queues: Basic information about each vdo thread threads: A synonym of 'queues' default: Equivalent to 'queues vdo' all: All of the above. dump-on-shutdown: Perform a default dump next time vdo shuts down. h]j)}(hhh](j)}(hstats: Outputs the current view of the vdo statistics. Mostly used by the vdostats userspace program to interpret the output buffer. h](j)}(hstats:h]hstats:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhhhKhjubj)}(hhh]h)}(h}Outputs the current view of the vdo statistics. 
Mostly used by the vdostats userspace program to interpret the output buffer.h]h}Outputs the current view of the vdo statistics. Mostly used by the vdostats userspace program to interpret the output buffer.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhhhKhjubj)}(hconfig: Outputs useful vdo configuration information. Mostly used by users who want to recreate a similar VDO volume and want to know the creation configuration used. h](j)}(hconfig:h]hconfig:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhhhMhjubj)}(hhh]h)}(hOutputs useful vdo configuration information. Mostly used by users who want to recreate a similar VDO volume and want to know the creation configuration used.h]hOutputs useful vdo configuration information. Mostly used by users who want to recreate a similar VDO volume and want to know the creation configuration used.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhhhMhjubj)}(hXdump: Dumps many internal structures to the system log. This is not always safe to run, so it should only be used to debug a hung vdo. Optional parameters to specify structures to dump are: viopool: The pool of I/O requests incoming bios pools: A synonym of 'viopool' vdo: Most of the structures managing on-disk data queues: Basic information about each vdo thread threads: A synonym of 'queues' default: Equivalent to 'queues vdo' all: All of the above. h](j)}(hdump:h]hdump:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhhhMhjubj)}(hhh](h)}(hDumps many internal structures to the system log. This is not always safe to run, so it should only be used to debug a hung vdo. Optional parameters to specify structures to dump are:h]hDumps many internal structures to the system log. This is not always safe to run, so it should only be used to debug a hung vdo. 
Optional parameters to specify structures to dump are:}(hj&hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj#ubj)}(hX viopool: The pool of I/O requests incoming bios pools: A synonym of 'viopool' vdo: Most of the structures managing on-disk data queues: Basic information about each vdo thread threads: A synonym of 'queues' default: Equivalent to 'queues vdo' all: All of the above. h]h)}(hX viopool: The pool of I/O requests incoming bios pools: A synonym of 'viopool' vdo: Most of the structures managing on-disk data queues: Basic information about each vdo thread threads: A synonym of 'queues' default: Equivalent to 'queues vdo' all: All of the above.h]hXviopool: The pool of I/O requests incoming bios pools: A synonym of ‘viopool’ vdo: Most of the structures managing on-disk data queues: Basic information about each vdo thread threads: A synonym of ‘queues’ default: Equivalent to ‘queues vdo’ all: All of the above.}(hj8hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM hj4ubah}(h]h ]h"]h$]h&]uh1jhhhM hj#ubeh}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhhhMhjubj)}(hDdump-on-shutdown: Perform a default dump next time vdo shuts down. h](j)}(hdump-on-shutdown:h]hdump-on-shutdown:}(hj\hhhNhNubah}(h]h ]h"]h$]h&]uh1jhhhMhjXubj)}(hhh]h)}(h0Perform a default dump next time vdo shuts down.h]h0Perform a default dump next time vdo shuts down.}(hjmhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjjubah}(h]h ]h"]h$]h&]uh1jhjXubeh}(h]h ]h"]h$]h&]uh1jhhhMhjubeh}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jhhhKhjqhhubeh}(h]messagesah ]h"]messagesah$]h&]uh1hhj}hhhhhKubh)}(hhh](h)}(hStatush]hStatus}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhMubj)}(hX device: The name of the vdo volume. operating mode: The current operating mode of the vdo volume; values may be 'normal', 'recovering' (the volume has detected an issue with its metadata and is attempting to repair itself), and 'read-only' (an error has occurred that forces the vdo volume to only support read operations and not writes). 
in recovery: Whether the vdo volume is currently in recovery mode; values may be 'recovering' or '-' which indicates not recovering. index state: The current state of the deduplication index in the vdo volume; values may be 'closed', 'closing', 'error', 'offline', 'online', 'opening', and 'unknown'. compression state: The current state of compression in the vdo volume; values may be 'offline' and 'online'. used physical blocks: The number of physical blocks in use by the vdo volume. total physical blocks: The total number of physical blocks the vdo volume may use; the difference between this value and the used physical blocks is the number of blocks the vdo volume has left before being full.h]hX device: The name of the vdo volume. operating mode: The current operating mode of the vdo volume; values may be 'normal', 'recovering' (the volume has detected an issue with its metadata and is attempting to repair itself), and 'read-only' (an error has occurred that forces the vdo volume to only support read operations and not writes). in recovery: Whether the vdo volume is currently in recovery mode; values may be 'recovering' or '-' which indicates not recovering. index state: The current state of the deduplication index in the vdo volume; values may be 'closed', 'closing', 'error', 'offline', 'online', 'opening', and 'unknown'. compression state: The current state of compression in the vdo volume; values may be 'offline' and 'online'. used physical blocks: The number of physical blocks in use by the vdo volume. 
total physical blocks: The total number of physical blocks the vdo volume may use; the difference between this value and the used physical blocks is the number of blocks the vdo volume has left before being full.}hjsbah}(h]h ]h"]h$]h&]hhuh1jhhhMhjhhubeh}(h]statusah ]h"]statusah$]h&]uh1hhj}hhhhhMubeh}(h]target-interfaceah ]h"]target interfaceah$]h&]uh1hhhhhhhhK1ubh)}(hhh](h)}(hMemory Requirementsh]hMemory Requirements}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhM?ubh)}(hgA vdo target requires a fixed 38 MB of RAM along with the following amounts that scale with the target:h]hgA vdo target requires a fixed 38 MB of RAM along with the following amounts that scale with the target:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMAhjhhubj)}(hhh](j)}(hr1.15 MB of RAM for each 1 MB of configured block map cache size. The block map cache requires a minimum of 150 MB.h]h)}(hr1.15 MB of RAM for each 1 MB of configured block map cache size. The block map cache requires a minimum of 150 MB.h]hr1.15 MB of RAM for each 1 MB of configured block map cache size. The block map cache requires a minimum of 150 MB.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMDhjubah}(h]h ]h"]h$]h&]uh1jhjhhhhhNubj)}(h-1.6 MB of RAM for each 1 TB of logical space.h]h)}(hj h]h-1.6 MB of RAM for each 1 TB of logical space.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMFhjubah}(h]h ]h"]h$]h&]uh1jhjhhhhhNubj)}(hG268 MB of RAM for each 1 TB of physical storage managed by the volume. h]h)}(hF268 MB of RAM for each 1 TB of physical storage managed by the volume.h]hF268 MB of RAM for each 1 TB of physical storage managed by the volume.}(hj#hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMGhjubah}(h]h ]h"]h$]h&]uh1jhjhhhhhNubeh}(h]h ]h"]h$]h&]bullet-uh1jhhhMDhjhhubh)}(hXMThe deduplication index requires additional memory which scales with the size of the deduplication window. For dense indexes, the index requires 1 GB of RAM per 1 TB of window. For sparse indexes, the index requires 1 GB of RAM per 10 TB of window. 
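The status fields above appear, in the order listed, in the target-specific portion of a 'dmsetup status' line. A minimal parsing sketch, assuming exactly the seven fields described above; the sample values are illustrative, not real output:

```python
def parse_vdo_status(status_fields):
    """Parse the vdo-specific fields of a 'dmsetup status' line."""
    (device, operating_mode, in_recovery, index_state,
     compression_state, used, total) = status_fields.split()
    used, total = int(used), int(total)
    return {
        "device": device,
        "operating mode": operating_mode,
        # '-' indicates the volume is not recovering
        "in recovery": in_recovery != "-",
        "index state": index_state,
        "compression state": compression_state,
        "used physical blocks": used,
        "total physical blocks": total,
        # blocks remaining before the volume is full
        "free physical blocks": total - used,
    }

info = parse_vdo_status("/dev/dm-3 normal - online online 123456 1048576")
```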
The index configuration is set when the target is formatted and may not be modified.h]hXMThe deduplication index requires additional memory which scales with the size of the deduplication window. For dense indexes, the index requires 1 GB of RAM per 1 TB of window. For sparse indexes, the index requires 1 GB of RAM per 10 TB of window. The index configuration is set when the target is formatted and may not be modified.}(hj?hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMIhjhhubeh}(h]memory-requirementsah ]h"]memory requirementsah$]h&]uh1hhhhhhhhM?ubh)}(hhh](h)}(hModule Parametersh]hModule Parameters}(hjXhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjUhhhhhMPubh)}(hThe vdo driver has a numeric parameter 'log_level' which controls the verbosity of logging from the driver. The default setting is 6 (LOGLEVEL_INFO and more severe messages).h]hThe vdo driver has a numeric parameter ‘log_level’ which controls the verbosity of logging from the driver. The default setting is 6 (LOGLEVEL_INFO and more severe messages).}(hjfhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMRhjUhhubeh}(h]module-parametersah ]h"]module parametersah$]h&]uh1hhhhhhhhMPubh)}(hhh](h)}(hRun-time Usageh]hRun-time Usage}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj|hhhhhMWubh)}(htWhen using dm-vdo, it is important to be aware of the ways in which its behavior differs from other storage targets.h]htWhen using dm-vdo, it is important to be aware of the ways in which its behavior differs from other storage targets.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMYhj|hhubj)}(hhh](j)}(hThere is no guarantee that over-writes of existing blocks will succeed. Because the underlying storage may be multiply referenced, over-writing an existing block generally requires a vdo to have a free block available. h]h)}(hThere is no guarantee that over-writes of existing blocks will succeed. 
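The fixed and scaling figures above can be combined into a rough sizing estimate. The sketch below simply encodes those numbers; it is an approximation for capacity planning, not an official sizing tool:

```python
def vdo_ram_estimate_mb(cache_mb=128, logical_tb=1.0, physical_tb=1.0,
                        index_window_tb=1.0, sparse_index=False):
    """Rough RAM requirement (in MB) for a vdo target."""
    ram = 38.0                       # fixed overhead
    ram += 1.15 * cache_mb           # block map cache
    ram += 1.6 * logical_tb          # logical space
    ram += 268.0 * physical_tb       # physical storage managed
    # deduplication index: 1 GB per 1 TB of window (dense),
    # or 1 GB per 10 TB of window (sparse)
    ram += 1024.0 * index_window_tb / (10.0 if sparse_index else 1.0)
    return ram
```

With the defaults (128 MB cache, 1 TB logical, 1 TB physical, 1 TB dense index window) this comes to roughly 1.5 GB, dominated by the index.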
Because the underlying storage may be multiply referenced, over-writing an existing block generally requires a vdo to have a free block available.h]hThere is no guarantee that over-writes of existing blocks will succeed. Because the underlying storage may be multiply referenced, over-writing an existing block generally requires a vdo to have a free block available.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM\hjubah}(h]h ]h"]h$]h&]uh1jhjhhhhhNubj)}(hXxWhen blocks are no longer in use, sending a discard request for those blocks lets the vdo release references for those blocks. If the vdo is thinly provisioned, discarding unused blocks is essential to prevent the target from running out of space. However, due to the sharing of duplicate blocks, no discard request for any given logical block is guaranteed to reclaim space. h]h)}(hXwWhen blocks are no longer in use, sending a discard request for those blocks lets the vdo release references for those blocks. If the vdo is thinly provisioned, discarding unused blocks is essential to prevent the target from running out of space. However, due to the sharing of duplicate blocks, no discard request for any given logical block is guaranteed to reclaim space.h]hXwWhen blocks are no longer in use, sending a discard request for those blocks lets the vdo release references for those blocks. If the vdo is thinly provisioned, discarding unused blocks is essential to prevent the target from running out of space. However, due to the sharing of duplicate blocks, no discard request for any given logical block is guaranteed to reclaim space.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMahjubah}(h]h ]h"]h$]h&]uh1jhjhhhhhNubj)}(hAssuming the underlying storage properly implements flush requests, vdo is resilient against crashes, however, unflushed writes may or may not persist after a crash. 
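Discards are typically issued either through the filesystem with fstrim(8) or directly against unused block ranges with blkdiscard(8). A sketch of both invocations (the mount point and device name are hypothetical, and both commands require root on a live system):

```python
import subprocess

def fstrim_cmd(mount_point):
    """Ask a mounted filesystem to discard its unused blocks."""
    return ["fstrim", "-v", mount_point]

def blkdiscard_cmd(device, offset_bytes, length_bytes):
    """Discard an explicit byte range on a block device."""
    return ["blkdiscard", "--offset", str(offset_bytes),
            "--length", str(length_bytes), device]

trim = fstrim_cmd("/mnt/vdo")
# subprocess.run(trim, check=True)  # uncomment on a live system
```

Running fstrim periodically (for example from a timer) keeps a thinly provisioned vdo from filling with blocks the filesystem no longer uses.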
h]h)}(hAssuming the underlying storage properly implements flush requests, vdo is resilient against crashes; however, unflushed writes may or may not persist after a crash.h]hAssuming the underlying storage properly implements flush requests, vdo is resilient against crashes; however, unflushed writes may or may not persist after a crash.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhhjubah}(h]h ]h"]h$]h&]uh1jhjhhhhhNubj)}(hEach write to a vdo target entails a significant amount of processing. However, much of the work is parallelizable. Therefore, vdo targets achieve better throughput at higher I/O depths, and can support up to 2048 requests in parallel. h]h)}(hEach write to a vdo target entails a significant amount of processing. However, much of the work is parallelizable. Therefore, vdo targets achieve better throughput at higher I/O depths, and can support up to 2048 requests in parallel.h]hEach write to a vdo target entails a significant amount of processing. However, much of the work is parallelizable. Therefore, vdo targets achieve better throughput at higher I/O depths, and can support up to 2048 requests in parallel.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMlhjubah}(h]h ]h"]h$]h&]uh1jhjhhhhhNubeh}(h]h ]h"]h$]h&]j=j>uh1jhhhM\hj|hhubeh}(h]run-time-usageah ]h"]run-time usageah$]h&]uh1hhhhhhhhMWubh)}(hhh](h)}(hTuningh]hTuning}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj hhhhhMrubh)}(hXThe vdo device has many options, and it can be difficult to make optimal choices without perfect knowledge of the workload. Additionally, most configuration options must be set when a vdo target is started and cannot be changed while it is active; reconfiguring the target requires shutting it down completely. Ideally, tuning with simulated workloads should be performed before deploying vdo in production environments.h]hXThe vdo device has many options, and it can be difficult to make optimal choices without perfect knowledge of the workload. 
Additionally, most configuration options must be set when a vdo target is started and cannot be changed while it is active; reconfiguring the target requires shutting it down completely. Ideally, tuning with simulated workloads should be performed before deploying vdo in production environments.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMthj hhubh)}(hXThe most important value to adjust is the block map cache size. In order to service a request for any logical address, a vdo must load the portion of the block map which holds the relevant mapping. These mappings are cached. Performance will suffer when the working set does not fit in the cache. By default, a vdo allocates 128 MB of metadata cache in RAM to support efficient access to 100 GB of logical space at a time. It should be scaled up proportionally for larger working sets.h]hXThe most important value to adjust is the block map cache size. In order to service a request for any logical address, a vdo must load the portion of the block map which holds the relevant mapping. These mappings are cached. Performance will suffer when the working set does not fit in the cache. By default, a vdo allocates 128 MB of metadata cache in RAM to support efficient access to 100 GB of logical space at a time. It should be scaled up proportionally for larger working sets.}(hj+ hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM|hj hhubh)}(hXThe logical and physical thread counts should also be adjusted. 
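The proportional scaling described above can be made concrete: 128 MB of cache covers roughly 100 GB of actively referenced logical space, so a larger working set calls for a proportionally larger cache. A sketch of that rule of thumb, using the default as a floor:

```python
def block_map_cache_mb(working_set_gb):
    """Suggest a block map cache size for a given working set.

    Scales the documented ratio (128 MB of cache per 100 GB of active
    logical space) proportionally, never going below the 128 MB default.
    """
    return max(128.0, 128.0 * working_set_gb / 100.0)
```

For example, a 400 GB working set suggests a 512 MB cache, which in turn costs about 1.15 times that in RAM.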
A logical thread controls a disjoint section of the block map, so additional logical threads increase parallelism and can increase throughput. Physical threads control a disjoint section of the data blocks, so additional physical threads can also increase throughput. However, excess threads can waste resources and increase contention.h]hXThe logical and physical thread counts should also be adjusted. A logical thread controls a disjoint section of the block map, so additional logical threads increase parallelism and can increase throughput. Physical threads control a disjoint section of the data blocks, so additional physical threads can also increase throughput. However, excess threads can waste resources and increase contention.}(hj9 hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj hhubh)}(hX Bio submission threads control the parallelism involved in sending I/O to the underlying storage; fewer threads mean there is more opportunity to reorder I/O requests for performance benefit, but also that each I/O request has to wait longer before being submitted.h]hX Bio submission threads control the parallelism involved in sending I/O to the underlying storage; fewer threads mean there is more opportunity to reorder I/O requests for performance benefit, but also that each I/O request has to wait longer before being submitted.}(hjG hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj hhubh)}(hXDBio acknowledgment threads are used for finishing I/O requests. This is done on dedicated threads since the amount of work required to execute a bio's callback cannot be controlled by the vdo itself. Usually one thread is sufficient but additional threads may be beneficial, particularly when bios have CPU-heavy callbacks.h]hXFBio acknowledgment threads are used for finishing I/O requests. This is done on dedicated threads since the amount of work required to execute a bio’s callback cannot be controlled by the vdo itself. 
Usually one thread is sufficient but additional threads may be beneficial, particularly when bios have CPU-heavy callbacks.}(hjU hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj hhubh)}(hCPU threads are used for hashing and for compression; in workloads with compression enabled, more threads may result in higher throughput.h]hCPU threads are used for hashing and for compression; in workloads with compression enabled, more threads may result in higher throughput.}(hjc hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj hhubh)}(hHash threads are used to sort active requests by hash and determine whether they should deduplicate; the most CPU intensive actions done by these threads are comparison of 4096-byte data blocks. In most cases, a single hash thread is sufficient.h]hHash threads are used to sort active requests by hash and determine whether they should deduplicate; the most CPU intensive actions done by these threads are comparison of 4096-byte data blocks. In most cases, a single hash thread is sufficient.}(hjq hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj hhubeh}(h]tuningah ]h"]tuningah$]h&]uh1hhhhhhhhMrubeh}(h]h ]h"]h$]h&]sourcehuh1hcurrent_sourceN current_lineNsettingsdocutils.frontendValues)}(hN generatorN datestampN source_linkN source_urlN toc_backlinksentryfootnote_backlinksK sectnum_xformKstrip_commentsNstrip_elements_with_classesN strip_classesN report_levelK halt_levelKexit_status_levelKdebugNwarning_streamN tracebackinput_encoding utf-8-siginput_encoding_error_handlerstrictoutput_encodingutf-8output_encoding_error_handlerj error_encodingutf-8error_encoding_error_handlerbackslashreplace language_codeenrecord_dependenciesNconfigN id_prefixhauto_id_prefixid dump_settingsNdump_internalsNdump_transformsNdump_pseudo_xmlNexpose_internalsNstrict_visitorN_disable_configN_sourceh _destinationN _config_files]7/var/lib/git/docbuild/linux/Documentation/docutils.confafile_insertion_enabled raw_enabledKline_length_limitM'pep_referencesN 
pep_base_urlhttps://peps.python.org/pep_file_url_templatepep-%04drfc_referencesN rfc_base_url&https://datatracker.ietf.org/doc/html/ tab_widthKtrim_footnote_reference_spacesyntax_highlightlong smart_quotessmartquotes_locales]character_level_inline_markupdoctitle_xform docinfo_xformKsectsubtitle_xform image_loadinglinkembed_stylesheetcloak_email_addressessection_self_linkenvNubreporterNindirect_targets]substitution_defs}substitution_names}refnames}refids}nameids}(hhj;j8jzjwjjj(j%jjjnjkjjjjjRjOjyjvj j j j u nametypes}(hމj;jzjj(jjnjjjRjyj j uh}(hhj8hjwj>jj}j%jjj+jkjjjqjjjOjjvjUj j|j j u footnote_refs} citation_refs} autofootnotes]autofootnote_refs]symbol_footnotes]symbol_footnote_refs] footnotes] citations]autofootnote_startKsymbol_footnote_startK id_counter collectionsCounter}Rparse_messages]transform_messages] transformerN include_log] decorationNhhub.