.. SPDX-License-Identifier: GPL-2.0

====================
Linux NVMe multipath
====================

This document describes NVMe multipath and the path selection policies
supported by the Linux NVMe host driver.


Introduction
============

The NVMe multipath feature in Linux integrates namespaces with the same
identifier into a single block device. Using multipath enhances the
reliability and stability of I/O access while improving bandwidth
performance. When a user sends I/O to this merged block device, the
multipath mechanism selects one of the underlying block devices (paths)
according to the configured policy. Different policies result in different
path selections.


Policies
========

All policies follow the ANA (Asymmetric Namespace Access) mechanism, meaning
that when an optimized path is available, it will be chosen over a
non-optimized one. The currently supported NVMe multipath policies are
numa (default), round-robin, and queue-depth.

To set the desired policy (e.g., round-robin), use one of the following methods:
   1. echo -n "round-robin" > /sys/module/nvme_core/parameters/iopolicy
   2. or add "nvme_core.iopolicy=round-robin" to the kernel command line.


NUMA
----

The NUMA policy selects the path closest to the NUMA node of the current CPU
for I/O distribution. This policy maintains the nearest paths to each NUMA
node based on network interface connections.

When to use the NUMA policy:
   1. Multi-core Systems: Optimizes memory access in multi-core and
      multi-processor systems, especially under NUMA architecture.
   2. High Affinity Workloads: Binds I/O processing to the CPU to reduce
      communication and data transfer delays across nodes.


Round-Robin
-----------

The round-robin policy distributes I/O requests evenly across all paths to
enhance throughput and resource utilization. Each I/O operation is sent to
the next path in sequence.

When to use the round-robin policy:
   1. Balanced Workloads: Effective for balanced and predictable workloads
      with similar I/O size and type.
   2. Homogeneous Path Performance: Utilizes all paths efficiently when
      performance characteristics (e.g., latency, bandwidth) are similar.


Queue-Depth
-----------

The queue-depth policy manages I/O requests based on the current queue depth
of each path, selecting the path with the least number of in-flight I/Os.

When to use the queue-depth policy:
   1. High load with small I/Os: Effectively balances load across paths when
      the load is high, and I/O operations consist of small, relatively
      fixed-sized requests.
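
As a worked example, the policy can be inspected and changed at runtime
through sysfs (root privileges required). The module-parameter path is the
one documented above; the per-subsystem ``iopolicy`` attribute shown last is
an assumption about the sysfs layout and may differ by kernel version, so
verify it exists on your system before relying on it::

   # Show the currently active global policy
   cat /sys/module/nvme_core/parameters/iopolicy

   # Switch the global policy to queue-depth for all NVMe subsystems
   echo -n "queue-depth" > /sys/module/nvme_core/parameters/iopolicy

   # (Assumed interface) override the policy for a single subsystem only
   echo -n "round-robin" > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy

A runtime change via sysfs does not survive a reboot; to make the setting
persistent, use the ``nvme_core.iopolicy=`` kernel command-line parameter
instead.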