========================
MMC Asynchronous Request
========================

Rationale
=========

How significant is the cache maintenance overhead?
It depends. Fast eMMC and multiple cache levels with speculative cache
pre-fetch make the cache overhead relatively significant. If the DMA
preparations for the next request are done in parallel with the current
transfer, the DMA preparation overhead would not affect the MMC performance.

The intention of non-blocking (asynchronous) MMC requests is to minimize the
time between when an MMC request ends and another MMC request begins.

Using mmc_wait_for_req(), the MMC controller is idle while dma_map_sg() and
dma_unmap_sg() are processing.
Using non-blocking MMC requests makes it possible to prepare the caches for
the next job in parallel with an active MMC request.

MMC block driver
================

The mmc_blk_issue_rw_rq() in the MMC block driver is made non-blocking.

The increase in throughput is proportional to how long it takes to prepare a
request (the major part of the preparation is dma_map_sg() and dma_unmap_sg())
and how fast the memory is. The faster the MMC/SD is, the more significant
the prepare-request time becomes. Roughly, the expected performance gain is
5% for large writes and 10% for large reads on an L2-cache platform. In power
save mode, when clocks run at a lower frequency, the DMA preparation may cost
even more.
As long as these slower preparations are run in parallel with the transfer,
performance won't be affected.

Details on measurements from IOZone and mmc_test
================================================

https://wiki.linaro.org/WorkingGroups/Kernel/Specs/StoragePerfMMC-async-req

MMC core API extension
======================

There is one new public function, mmc_start_req().

It starts a new MMC command request for a host. The function isn't truly
non-blocking: if there is an ongoing async request, it waits for that request
to complete, starts the new one, and returns; it doesn't wait for the new
request to complete. If there is no ongoing request, it starts the new
request and returns immediately.
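The handshake this describes -- completing the previous request while the next
one has already been prepared -- can be modeled with a small standalone
program. This is plain C, not kernel code; ``struct req``, ``struct host``,
``prepare()``, ``unprepare()`` and ``start_req()`` are illustrative stand-ins
for the real request objects, dma_map_sg()/dma_unmap_sg() and mmc_start_req()::

  #include <stdio.h>
  #include <stddef.h>

  /*
   * Standalone model: a "host" holds one request in flight.
   * start_req() mimics mmc_start_req() semantics: finish the
   * previous request, install the new one, return the old one.
   */
  struct req { int id; int prepared; int completed; };
  struct host { struct req *active; };

  static void prepare(struct req *r)   { r->prepared = 1; }  /* ~ dma_map_sg() */
  static void unprepare(struct req *r) { r->completed = 1; } /* ~ dma_unmap_sg() */

  static struct req *start_req(struct host *h, struct req *next)
  {
          struct req *prev = h->active;

          if (prev)
                  unprepare(prev); /* previous transfer has finished */
          h->active = next;        /* next may be NULL to drain the pipeline */
          return prev;
  }

  int main(void)
  {
          struct host h = { NULL };
          struct req reqs[3] = { {1, 0, 0}, {2, 0, 0}, {3, 0, 0} };
          struct req *last;
          int i;

          for (i = 0; i < 3; i++) {
                  prepare(&reqs[i]); /* done while the previous one is "in flight" */
                  start_req(&h, &reqs[i]);
          }
          last = start_req(&h, NULL); /* drain: complete the final request */
          printf("%d %d %d %d\n", reqs[0].completed, reqs[1].completed,
                 reqs[2].completed, last->id);
          return 0;
  }

The point of the model is the pipelining: each prepare() for request N+1 runs
while request N is still the host's active request.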
MMC host extensions
===================

There are two optional members in mmc_host_ops -- pre_req() and post_req() --
that the host driver may implement in order to move work to before and after
the actual mmc_host_ops.request() function is called.

In the DMA case, pre_req() may do dma_map_sg() and prepare the DMA
descriptor, and post_req() runs dma_unmap_sg().
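The hook pattern can be sketched as a standalone program. Again this is an
illustrative model, not the kernel API: the ops struct and the ``issue()``
helper are stand-ins for mmc_host_ops and the core's dispatch, showing only
that pre_req() and post_req() are optional and bracket the actual transfer::

  #include <stdio.h>

  /* Standalone model of the pre_req()/post_req() hook pattern. */
  struct request { int mapped; int transferred; };

  struct host_ops {
          void (*pre_req)(struct request *);  /* optional: e.g. dma_map_sg() */
          void (*request)(struct request *);  /* the actual transfer */
          void (*post_req)(struct request *); /* optional: e.g. dma_unmap_sg() */
  };

  static void my_pre(struct request *r)  { r->mapped = 1; }
  static void my_req(struct request *r)  { r->transferred = r->mapped; }
  static void my_post(struct request *r) { r->mapped = 0; }

  static void issue(const struct host_ops *ops, struct request *r)
  {
          if (ops->pre_req)
                  ops->pre_req(r);  /* may run before the previous transfer ends */
          ops->request(r);
          if (ops->post_req)
                  ops->post_req(r); /* may run after the next transfer starts */
  }

  int main(void)
  {
          struct host_ops ops = { my_pre, my_req, my_post };
          struct request r = { 0, 0 };

          issue(&ops, &r);
          printf("%d %d\n", r.transferred, r.mapped);
          return 0;
  }

In the real driver the gain comes from the core calling pre_req() for the next
request while the current transfer is still running, and post_req() for the
finished request after the next transfer has been started.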
Optimize for the first request
==============================

The first request in a series of requests can't be prepared in parallel with
the previous transfer, since there is no previous request.

The argument is_first_req in pre_req() indicates that there is no previous
request. The host driver may optimize for this scenario to minimize the
performance loss. A way to optimize for this is to split the current request
into two chunks: prepare the first chunk and start the request, and finally
prepare the second chunk and start the transfer.

Pseudocode to handle the is_first_req scenario with minimal prepare overhead::

  if (is_first_req && req->size > threshold) {
     /* start MMC transfer for the complete transfer size */
     mmc_start_command(MMC_CMD_TRANSFER_FULL_SIZE);

     /*
      * Begin to prepare DMA while cmd is being processed by MMC.
      * The first chunk of the request should take the same time
      * to prepare as the "MMC process command time".
      * If prepare time exceeds MMC cmd time
      * the transfer is delayed, guesstimate max 4k as first chunk size.
      */
     prepare_1st_chunk_for_dma(req);
     /* flush pending desc to the DMAC (dmaengine.h) */
     dma_issue_pending(req->dma_desc);

     prepare_2nd_chunk_for_dma(req);
     /*
      * The second issue_pending should be called before MMC runs out
      * of the first chunk. If the MMC runs out of the first data chunk
      * before this call, the transfer is delayed.
      */
     dma_issue_pending(req->dma_desc);
  }