tsphinx.addnodesdocument)}( rawsourcechildren]( translations LanguagesNode)}(hhh](h pending_xref)}(hhh]docutils.nodesTextChinese (Simplified)}parenthsba attributes}(ids]classes]names]dupnames]backrefs] refdomainstdreftypedoc reftarget:/translations/zh_CN/arch/arm/stm32/stm32-dma-mdma-chainingmodnameN classnameN refexplicitutagnamehhh ubh)}(hhh]hChinese (Traditional)}hh2sbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget:/translations/zh_TW/arch/arm/stm32/stm32-dma-mdma-chainingmodnameN classnameN refexplicituh1hhh ubh)}(hhh]hItalian}hhFsbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget:/translations/it_IT/arch/arm/stm32/stm32-dma-mdma-chainingmodnameN classnameN refexplicituh1hhh ubh)}(hhh]hJapanese}hhZsbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget:/translations/ja_JP/arch/arm/stm32/stm32-dma-mdma-chainingmodnameN classnameN refexplicituh1hhh ubh)}(hhh]hKorean}hhnsbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget:/translations/ko_KR/arch/arm/stm32/stm32-dma-mdma-chainingmodnameN classnameN refexplicituh1hhh ubh)}(hhh]hSpanish}hhsbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget:/translations/sp_SP/arch/arm/stm32/stm32-dma-mdma-chainingmodnameN classnameN refexplicituh1hhh ubeh}(h]h ]h"]h$]h&]current_languageEnglishuh1h hh _documenthsourceNlineNubhcomment)}(h SPDX-License-Identifier: GPL-2.0h]h SPDX-License-Identifier: GPL-2.0}hhsbah}(h]h ]h"]h$]h&] xml:spacepreserveuh1hhhhhhT/var/lib/git/docbuild/linux/Documentation/arch/arm/stm32/stm32-dma-mdma-chaining.rsthKubhsection)}(hhh](htitle)}(hSTM32 DMA-MDMA chainingh]hSTM32 DMA-MDMA chaining}(hhhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhhhhhKubh)}(hhh](h)}(h Introductionh]h Introduction}(hhhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhhhhhK ubh block_quote)}(hXThis document describes the STM32 DMA-MDMA chaining feature. But before going further, let's introduce the peripherals involved. To offload data transfers from the CPU, STM32 microprocessors (MPUs) embed direct memory access controllers (DMA). 
STM32MP1 SoCs embed both STM32 DMA and STM32 MDMA controllers. STM32 DMA request routing capabilities are enhanced by a DMA request multiplexer (STM32 DMAMUX). **STM32 DMAMUX** STM32 DMAMUX routes any DMA request from a given peripheral to any STM32 DMA controller (STM32MP1 counts two STM32 DMA controllers) channels. **STM32 DMA** STM32 DMA is mainly used to implement central data buffer storage (usually in the system SRAM) for different peripheral. It can access external RAMs but without the ability to generate convenient burst transfer ensuring the best load of the AXI. **STM32 MDMA** STM32 MDMA (Master DMA) is mainly used to manage direct data transfers between RAM data buffers without CPU intervention. It can also be used in a hierarchical structure that uses STM32 DMA as first level data buffer interfaces for AHB peripherals, while the STM32 MDMA acts as a second level DMA with better performance. As a AXI/AHB master, STM32 MDMA can take control of the AXI/AHB bus. h](h paragraph)}(hThis document describes the STM32 DMA-MDMA chaining feature. But before going further, let's introduce the peripherals involved.h]hThis document describes the STM32 DMA-MDMA chaining feature. But before going further, let’s introduce the peripherals involved.}(hhhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK hhubh)}(hrTo offload data transfers from the CPU, STM32 microprocessors (MPUs) embed direct memory access controllers (DMA).h]hrTo offload data transfers from the CPU, STM32 microprocessors (MPUs) embed direct memory access controllers (DMA).}(hhhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhhubh)}(hSTM32MP1 SoCs embed both STM32 DMA and STM32 MDMA controllers. STM32 DMA request routing capabilities are enhanced by a DMA request multiplexer (STM32 DMAMUX).h]hSTM32MP1 SoCs embed both STM32 DMA and STM32 MDMA controllers. 
STM32 DMA request routing capabilities are enhanced by a DMA request multiplexer (STM32 DMAMUX).}(hhhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhhubh)}(h**STM32 DMAMUX**h]hstrong)}(hjh]h STM32 DMAMUX}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hhhhKhhubh)}(hSTM32 DMAMUX routes any DMA request from a given peripheral to any STM32 DMA controller (STM32MP1 counts two STM32 DMA controllers) channels.h]hSTM32 DMAMUX routes any DMA request from a given peripheral to any STM32 DMA controller (STM32MP1 counts two STM32 DMA controllers) channels.}(hj%hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhhubh)}(h **STM32 DMA**h]j)}(hj5h]h STM32 DMA}(hj7hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj3ubah}(h]h ]h"]h$]h&]uh1hhhhKhhubh)}(hSTM32 DMA is mainly used to implement central data buffer storage (usually in the system SRAM) for different peripheral. It can access external RAMs but without the ability to generate convenient burst transfer ensuring the best load of the AXI.h]hSTM32 DMA is mainly used to implement central data buffer storage (usually in the system SRAM) for different peripheral. It can access external RAMs but without the ability to generate convenient burst transfer ensuring the best load of the AXI.}(hjJhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhhubh)}(h**STM32 MDMA**h]j)}(hjZh]h STM32 MDMA}(hj\hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjXubah}(h]h ]h"]h$]h&]uh1hhhhK!hhubh)}(hXSTM32 MDMA (Master DMA) is mainly used to manage direct data transfers between RAM data buffers without CPU intervention. It can also be used in a hierarchical structure that uses STM32 DMA as first level data buffer interfaces for AHB peripherals, while the STM32 MDMA acts as a second level DMA with better performance. As a AXI/AHB master, STM32 MDMA can take control of the AXI/AHB bus.h]hXSTM32 MDMA (Master DMA) is mainly used to manage direct data transfers between RAM data buffers without CPU intervention. 
It can also be used in a hierarchical structure that uses STM32 DMA as first level data buffer interfaces for AHB peripherals, while the STM32 MDMA acts as a second level DMA with better performance. As a AXI/AHB master, STM32 MDMA can take control of the AXI/AHB bus.}(hjohhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK#hhubeh}(h]h ]h"]h$]h&]uh1hhhhK hhhhubeh}(h] introductionah ]h"] introductionah$]h&]uh1hhhhhhhhK ubh)}(hhh](h)}(h Principlesh]h Principles}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhK,ubh)}(hXTSTM32 DMA-MDMA chaining feature relies on the strengths of STM32 DMA and STM32 MDMA controllers. STM32 DMA has a circular Double Buffer Mode (DBM). At each end of transaction (when DMA data counter - DMA_SxNDTR - reaches 0), the memory pointers (configured with DMA_SxSM0AR and DMA_SxM1AR) are swapped and the DMA data counter is automatically reloaded. This allows the SW or the STM32 MDMA to process one memory area while the second memory area is being filled/used by the STM32 DMA transfer. With STM32 MDMA linked-list mode, a single request initiates the data array (collection of nodes) to be transferred until the linked-list pointer for the channel is null. The channel transfer complete of the last node is the end of transfer, unless first and last nodes are linked to each other, in such a case, the linked-list loops on to create a circular MDMA transfer. STM32 MDMA has direct connections with STM32 DMA. This enables autonomous communication and synchronization between peripherals, thus saving CPU resources and bus congestion. Transfer Complete signal of STM32 DMA channel can triggers STM32 MDMA transfer. STM32 MDMA can clear the request generated by the STM32 DMA by writing to its Interrupt Clear register (whose address is stored in MDMA_CxMAR, and bit mask in MDMA_CxMDR). .. 
table:: STM32 MDMA interconnect table with STM32 DMA +--------------+----------------+-----------+------------+ | STM32 DMAMUX | STM32 DMA | STM32 DMA | STM32 MDMA | | channels | channels | Transfer | request | | | | complete | | | | | signal | | +==============+================+===========+============+ | Channel *0* | DMA1 channel 0 | dma1_tcf0 | *0x00* | +--------------+----------------+-----------+------------+ | Channel *1* | DMA1 channel 1 | dma1_tcf1 | *0x01* | +--------------+----------------+-----------+------------+ | Channel *2* | DMA1 channel 2 | dma1_tcf2 | *0x02* | +--------------+----------------+-----------+------------+ | Channel *3* | DMA1 channel 3 | dma1_tcf3 | *0x03* | +--------------+----------------+-----------+------------+ | Channel *4* | DMA1 channel 4 | dma1_tcf4 | *0x04* | +--------------+----------------+-----------+------------+ | Channel *5* | DMA1 channel 5 | dma1_tcf5 | *0x05* | +--------------+----------------+-----------+------------+ | Channel *6* | DMA1 channel 6 | dma1_tcf6 | *0x06* | +--------------+----------------+-----------+------------+ | Channel *7* | DMA1 channel 7 | dma1_tcf7 | *0x07* | +--------------+----------------+-----------+------------+ | Channel *8* | DMA2 channel 0 | dma2_tcf0 | *0x08* | +--------------+----------------+-----------+------------+ | Channel *9* | DMA2 channel 1 | dma2_tcf1 | *0x09* | +--------------+----------------+-----------+------------+ | Channel *10* | DMA2 channel 2 | dma2_tcf2 | *0x0A* | +--------------+----------------+-----------+------------+ | Channel *11* | DMA2 channel 3 | dma2_tcf3 | *0x0B* | +--------------+----------------+-----------+------------+ | Channel *12* | DMA2 channel 4 | dma2_tcf4 | *0x0C* | +--------------+----------------+-----------+------------+ | Channel *13* | DMA2 channel 5 | dma2_tcf5 | *0x0D* | +--------------+----------------+-----------+------------+ | Channel *14* | DMA2 channel 6 | dma2_tcf6 | *0x0E* | 
+--------------+----------------+-----------+------------+ | Channel *15* | DMA2 channel 7 | dma2_tcf7 | *0x0F* | +--------------+----------------+-----------+------------+ STM32 DMA-MDMA chaining feature then uses a SRAM buffer. STM32MP1 SoCs embed three fast access static internal RAMs of various size, used for data storage. Due to STM32 DMA legacy (within microcontrollers), STM32 DMA performances are bad with DDR, while they are optimal with SRAM. Hence the SRAM buffer used between STM32 DMA and STM32 MDMA. This buffer is split in two equal periods and STM32 DMA uses one period while STM32 MDMA uses the other period simultaneously. :: dma[1:2]-tcf[0:7] .----------------. ____________ ' _________ V____________ | STM32 DMA | / __|>_ \ | STM32 MDMA | |------------| | / \ | |------------| | DMA_SxM0AR |<=>| | SRAM | |<=>| []-[]...[] | | DMA_SxM1AR | | \_____/ | | | |____________| \___<|____/ |____________| STM32 DMA-MDMA chaining uses (struct dma_slave_config).peripheral_config to exchange the parameters needed to configure MDMA. These parameters are gathered into a u32 array with three values: * the STM32 MDMA request (which is actually the DMAMUX channel ID), * the address of the STM32 DMA register to clear the Transfer Complete interrupt flag, * the mask of the Transfer Complete interrupt flag of the STM32 DMA channel. h](h)}(h`STM32 DMA-MDMA chaining feature relies on the strengths of STM32 DMA and STM32 MDMA controllers.h]h`STM32 DMA-MDMA chaining feature relies on the strengths of STM32 DMA and STM32 MDMA controllers.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK.hjubh)}(hXSTM32 DMA has a circular Double Buffer Mode (DBM). At each end of transaction (when DMA data counter - DMA_SxNDTR - reaches 0), the memory pointers (configured with DMA_SxSM0AR and DMA_SxM1AR) are swapped and the DMA data counter is automatically reloaded. 
This allows the SW or the STM32 MDMA to process one memory area while the second memory area is being filled/used by the STM32 DMA transfer.h]hXSTM32 DMA has a circular Double Buffer Mode (DBM). At each end of transaction (when DMA data counter - DMA_SxNDTR - reaches 0), the memory pointers (configured with DMA_SxSM0AR and DMA_SxM1AR) are swapped and the DMA data counter is automatically reloaded. This allows the SW or the STM32 MDMA to process one memory area while the second memory area is being filled/used by the STM32 DMA transfer.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK1hjubh)}(hXtWith STM32 MDMA linked-list mode, a single request initiates the data array (collection of nodes) to be transferred until the linked-list pointer for the channel is null. The channel transfer complete of the last node is the end of transfer, unless first and last nodes are linked to each other, in such a case, the linked-list loops on to create a circular MDMA transfer.h]hXtWith STM32 MDMA linked-list mode, a single request initiates the data array (collection of nodes) to be transferred until the linked-list pointer for the channel is null. The channel transfer complete of the last node is the end of transfer, unless first and last nodes are linked to each other, in such a case, the linked-list loops on to create a circular MDMA transfer.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK8hjubh)}(hXSTM32 MDMA has direct connections with STM32 DMA. This enables autonomous communication and synchronization between peripherals, thus saving CPU resources and bus congestion. Transfer Complete signal of STM32 DMA channel can triggers STM32 MDMA transfer. STM32 MDMA can clear the request generated by the STM32 DMA by writing to its Interrupt Clear register (whose address is stored in MDMA_CxMAR, and bit mask in MDMA_CxMDR).h]hXSTM32 MDMA has direct connections with STM32 DMA. This enables autonomous communication and synchronization between peripherals, thus saving CPU resources and bus congestion. 
Transfer Complete signal of STM32 DMA channel can triggers STM32 MDMA transfer. STM32 MDMA can clear the request generated by the STM32 DMA by writing to its Interrupt Clear register (whose address is stored in MDMA_CxMAR, and bit mask in MDMA_CxMDR).}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK>hjubhtable)}(hhh](h)}(h,STM32 MDMA interconnect table with STM32 DMAh]h,STM32 MDMA interconnect table with STM32 DMA}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKEhjubhtgroup)}(hhh](hcolspec)}(hhh]h}(h]h ]h"]h$]h&]colwidthKuh1jhjubj)}(hhh]h}(h]h ]h"]h$]h&]colwidthKuh1jhjubj)}(hhh]h}(h]h ]h"]h$]h&]colwidthK uh1jhjubj)}(hhh]h}(h]h ]h"]h$]h&]colwidthK uh1jhjubhthead)}(hhh]hrow)}(hhh](hentry)}(hhh]h)}(hSTM32 DMAMUX channelsh]hSTM32 DMAMUX channels}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKHhj&ubah}(h]h ]h"]h$]h&]uh1j$hj!ubj%)}(hhh]h)}(hSTM32 DMA channelsh]hSTM32 DMA channels}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKHhj=ubah}(h]h ]h"]h$]h&]uh1j$hj!ubj%)}(hhh]h)}(h"STM32 DMA Transfer complete signalh]h"STM32 DMA Transfer complete signal}(hjWhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKHhjTubah}(h]h ]h"]h$]h&]uh1j$hj!ubj%)}(hhh]h)}(hSTM32 MDMA requesth]hSTM32 MDMA request}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKHhjkubah}(h]h ]h"]h$]h&]uh1j$hj!ubeh}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jhjubhtbody)}(hhh](j )}(hhh](j%)}(hhh]h)}(h Channel *0*h](hChannel }(hjhhhNhNubhemphasis)}(h*0*h]h0}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1hhhhKMhjubah}(h]h ]h"]h$]h&]uh1j$hjubj%)}(hhh]h)}(hDMA1 channel 0h]hDMA1 channel 0}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKMhjubah}(h]h ]h"]h$]h&]uh1j$hjubj%)}(hhh]h)}(h dma1_tcf0h]h dma1_tcf0}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKMhjubah}(h]h ]h"]h$]h&]uh1j$hjubj%)}(hhh]h)}(h*0x00*h]j)}(hjh]h0x00}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhhhKMhjubah}(h]h ]h"]h$]h&]uh1j$hjubeh}(h]h ]h"]h$]h&]uh1jhjubj )}(hhh](j%)}(hhh]h)}(h Channel *1*h](hChannel }(hjhhhNhNubj)}(h*1*h]h1}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1hhhhKOhjubah}(h]h 
]h"]h$]h&]uh1j$hjubj%)}(hhh]h)}(hDMA1 channel 1h]hDMA1 channel 1}(hj<hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKOhj9ubah}(h]h ]h"]h$]h&]uh1j$hjubj%)}(hhh]h)}(h dma1_tcf1h]h dma1_tcf1}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKOhjPubah}(h]h ]h"]h$]h&]uh1j$hjubj%)}(hhh]h)}(h*0x01*h]j)}(hjlh]h0x01}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjjubah}(h]h ]h"]h$]h&]uh1hhhhKOhjgubah}(h]h ]h"]h$]h&]uh1j$hjubeh}(h]h ]h"]h$]h&]uh1jhjubj )}(hhh](j%)}(hhh]h)}(h Channel *2*h](hChannel }(hjhhhNhNubj)}(h*2*h]h2}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1hhhhKQhjubah}(h]h ]h"]h$]h&]uh1j$hjubj%)}(hhh]h)}(hDMA1 channel 2h]hDMA1 channel 2}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKQhjubah}(h]h ]h"]h$]h&]uh1j$hjubj%)}(hhh]h)}(h dma1_tcf2h]h dma1_tcf2}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKQhjubah}(h]h ]h"]h$]h&]uh1j$hjubj%)}(hhh]h)}(h*0x02*h]j)}(hjh]h0x02}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhhhKQhjubah}(h]h ]h"]h$]h&]uh1j$hjubeh}(h]h ]h"]h$]h&]uh1jhjubj )}(hhh](j%)}(hhh]h)}(h Channel *3*h](hChannel }(hjhhhNhNubj)}(h*3*h]h3}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1hhhhKShj ubah}(h]h ]h"]h$]h&]uh1j$hj ubj%)}(hhh]h)}(hDMA1 channel 3h]hDMA1 channel 3}(hj4hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKShj1ubah}(h]h ]h"]h$]h&]uh1j$hj ubj%)}(hhh]h)}(h dma1_tcf3h]h dma1_tcf3}(hjKhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKShjHubah}(h]h ]h"]h$]h&]uh1j$hj ubj%)}(hhh]h)}(h*0x03*h]j)}(hjdh]h0x03}(hjfhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjbubah}(h]h ]h"]h$]h&]uh1hhhhKShj_ubah}(h]h ]h"]h$]h&]uh1j$hj ubeh}(h]h ]h"]h$]h&]uh1jhjubj )}(hhh](j%)}(hhh]h)}(h Channel *4*h](hChannel }(hjhhhNhNubj)}(h*4*h]h4}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1hhhhKUhjubah}(h]h ]h"]h$]h&]uh1j$hjubj%)}(hhh]h)}(hDMA1 channel 4h]hDMA1 channel 4}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKUhjubah}(h]h ]h"]h$]h&]uh1j$hjubj%)}(hhh]h)}(h dma1_tcf4h]h dma1_tcf4}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKUhjubah}(h]h ]h"]h$]h&]uh1j$hjubj%)}(hhh]h)}(h*0x04*h]j)}(hjh]h0x04}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h 
]h"]h$]h&]uh1hhhhKUhjubah}(h]h ]h"]h$]h&]uh1j$hjubeh}(h]h ]h"]h$]h&]uh1jhjubj )}(hhh](j%)}(hhh]h)}(h Channel *5*h](hChannel }(hjhhhNhNubj)}(h*5*h]h5}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1hhhhKWhjubah}(h]h ]h"]h$]h&]uh1j$hjubj%)}(hhh]h)}(hDMA1 channel 5h]hDMA1 channel 5}(hj,hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKWhj)ubah}(h]h ]h"]h$]h&]uh1j$hjubj%)}(hhh]h)}(h dma1_tcf5h]h dma1_tcf5}(hjChhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKWhj@ubah}(h]h ]h"]h$]h&]uh1j$hjubj%)}(hhh]h)}(h*0x05*h]j)}(hj\h]h0x05}(hj^hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjZubah}(h]h ]h"]h$]h&]uh1hhhhKWhjWubah}(h]h ]h"]h$]h&]uh1j$hjubeh}(h]h ]h"]h$]h&]uh1jhjubj )}(hhh](j%)}(hhh]h)}(h Channel *6*h](hChannel }(hjhhhNhNubj)}(h*6*h]h6}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1hhhhKYhjubah}(h]h ]h"]h$]h&]uh1j$hj}ubj%)}(hhh]h)}(hDMA1 channel 6h]hDMA1 channel 6}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKYhjubah}(h]h ]h"]h$]h&]uh1j$hj}ubj%)}(hhh]h)}(h dma1_tcf6h]h dma1_tcf6}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKYhjubah}(h]h ]h"]h$]h&]uh1j$hj}ubj%)}(hhh]h)}(h*0x06*h]j)}(hjh]h0x06}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhhhKYhjubah}(h]h ]h"]h$]h&]uh1j$hj}ubeh}(h]h ]h"]h$]h&]uh1jhjubj )}(hhh](j%)}(hhh]h)}(h Channel *7*h](hChannel }(hjhhhNhNubj)}(h*7*h]h7}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1hhhhK[hjubah}(h]h ]h"]h$]h&]uh1j$hjubj%)}(hhh]h)}(hDMA1 channel 7h]hDMA1 channel 7}(hj$hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK[hj!ubah}(h]h ]h"]h$]h&]uh1j$hjubj%)}(hhh]h)}(h dma1_tcf7h]h dma1_tcf7}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK[hj8ubah}(h]h ]h"]h$]h&]uh1j$hjubj%)}(hhh]h)}(h*0x07*h]j)}(hjTh]h0x07}(hjVhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjRubah}(h]h ]h"]h$]h&]uh1hhhhK[hjOubah}(h]h ]h"]h$]h&]uh1j$hjubeh}(h]h ]h"]h$]h&]uh1jhjubj )}(hhh](j%)}(hhh]h)}(h Channel *8*h](hChannel }(hj{hhhNhNubj)}(h*8*h]h8}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj{ubeh}(h]h ]h"]h$]h&]uh1hhhhK]hjxubah}(h]h ]h"]h$]h&]uh1j$hjuubj%)}(hhh]h)}(hDMA2 channel 0h]hDMA2 channel 0}(hjhhhNhNubah}(h]h 
]h"]h$]h&]uh1hhhhK]hjubah}(h]h ]h"]h$]h&]uh1j$hjuubj%)}(hhh]h)}(h dma2_tcf0h]h dma2_tcf0}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK]hjubah}(h]h ]h"]h$]h&]uh1j$hjuubj%)}(hhh]h)}(h*0x08*h]j)}(hjh]h0x08}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhhhK]hjubah}(h]h ]h"]h$]h&]uh1j$hjuubeh}(h]h ]h"]h$]h&]uh1jhjubj )}(hhh](j%)}(hhh]h)}(h Channel *9*h](hChannel }(hjhhhNhNubj)}(h*9*h]h9}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1hhhhK_hjubah}(h]h ]h"]h$]h&]uh1j$hjubj%)}(hhh]h)}(hDMA2 channel 1h]hDMA2 channel 1}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK_hjubah}(h]h ]h"]h$]h&]uh1j$hjubj%)}(hhh]h)}(h dma2_tcf1h]h dma2_tcf1}(hj3hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhK_hj0ubah}(h]h ]h"]h$]h&]uh1j$hjubj%)}(hhh]h)}(h*0x09*h]j)}(hjLh]h0x09}(hjNhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjJubah}(h]h ]h"]h$]h&]uh1hhhhK_hjGubah}(h]h ]h"]h$]h&]uh1j$hjubeh}(h]h ]h"]h$]h&]uh1jhjubj )}(hhh](j%)}(hhh]h)}(h Channel *10*h](hChannel }(hjshhhNhNubj)}(h*10*h]h10}(hj{hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjsubeh}(h]h ]h"]h$]h&]uh1hhhhKahjpubah}(h]h ]h"]h$]h&]uh1j$hjmubj%)}(hhh]h)}(hDMA2 channel 2h]hDMA2 channel 2}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKahjubah}(h]h ]h"]h$]h&]uh1j$hjmubj%)}(hhh]h)}(h dma2_tcf2h]h dma2_tcf2}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKahjubah}(h]h ]h"]h$]h&]uh1j$hjmubj%)}(hhh]h)}(h*0x0A*h]j)}(hjh]h0x0A}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhhhKahjubah}(h]h ]h"]h$]h&]uh1j$hjmubeh}(h]h ]h"]h$]h&]uh1jhjubj )}(hhh](j%)}(hhh]h)}(h Channel *11*h](hChannel }(hjhhhNhNubj)}(h*11*h]h11}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1hhhhKchjubah}(h]h ]h"]h$]h&]uh1j$hjubj%)}(hhh]h)}(hDMA2 channel 3h]hDMA2 channel 3}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKchjubah}(h]h ]h"]h$]h&]uh1j$hjubj%)}(hhh]h)}(h dma2_tcf3h]h dma2_tcf3}(hj+hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKchj(ubah}(h]h ]h"]h$]h&]uh1j$hjubj%)}(hhh]h)}(h*0x0B*h]j)}(hjDh]h0x0B}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjBubah}(h]h ]h"]h$]h&]uh1hhhhKchj?ubah}(h]h ]h"]h$]h&]uh1j$hjubeh}(h]h ]h"]h$]h&]uh1jhjubj 
)}(hhh](j%)}(hhh]h)}(h Channel *12*h](hChannel }(hjkhhhNhNubj)}(h*12*h]h12}(hjshhhNhNubah}(h]h ]h"]h$]h&]uh1jhjkubeh}(h]h ]h"]h$]h&]uh1hhhhKehjhubah}(h]h ]h"]h$]h&]uh1j$hjeubj%)}(hhh]h)}(hDMA2 channel 4h]hDMA2 channel 4}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKehjubah}(h]h ]h"]h$]h&]uh1j$hjeubj%)}(hhh]h)}(h dma2_tcf4h]h dma2_tcf4}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKehjubah}(h]h ]h"]h$]h&]uh1j$hjeubj%)}(hhh]h)}(h*0x0C*h]j)}(hjh]h0x0C}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhhhKehjubah}(h]h ]h"]h$]h&]uh1j$hjeubeh}(h]h ]h"]h$]h&]uh1jhjubj )}(hhh](j%)}(hhh]h)}(h Channel *13*h](hChannel }(hjhhhNhNubj)}(h*13*h]h13}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1hhhhKghjubah}(h]h ]h"]h$]h&]uh1j$hjubj%)}(hhh]h)}(hDMA2 channel 5h]hDMA2 channel 5}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKghj ubah}(h]h ]h"]h$]h&]uh1j$hjubj%)}(hhh]h)}(h dma2_tcf5h]h dma2_tcf5}(hj# hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKghj ubah}(h]h ]h"]h$]h&]uh1j$hjubj%)}(hhh]h)}(h*0x0D*h]j)}(hj< h]h0x0D}(hj> hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj: ubah}(h]h ]h"]h$]h&]uh1hhhhKghj7 ubah}(h]h ]h"]h$]h&]uh1j$hjubeh}(h]h ]h"]h$]h&]uh1jhjubj )}(hhh](j%)}(hhh]h)}(h Channel *14*h](hChannel }(hjc hhhNhNubj)}(h*14*h]h14}(hjk hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjc ubeh}(h]h ]h"]h$]h&]uh1hhhhKihj` ubah}(h]h ]h"]h$]h&]uh1j$hj] ubj%)}(hhh]h)}(hDMA2 channel 6h]hDMA2 channel 6}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKihj ubah}(h]h ]h"]h$]h&]uh1j$hj] ubj%)}(hhh]h)}(h dma2_tcf6h]h dma2_tcf6}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKihj ubah}(h]h ]h"]h$]h&]uh1j$hj] ubj%)}(hhh]h)}(h*0x0E*h]j)}(hj h]h0x0E}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hhhhKihj ubah}(h]h ]h"]h$]h&]uh1j$hj] ubeh}(h]h ]h"]h$]h&]uh1jhjubj )}(hhh](j%)}(hhh]h)}(h Channel *15*h](hChannel }(hj hhhNhNubj)}(h*15*h]h15}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubeh}(h]h ]h"]h$]h&]uh1hhhhKkhj ubah}(h]h ]h"]h$]h&]uh1j$hj ubj%)}(hhh]h)}(hDMA2 channel 7h]hDMA2 channel 7}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKkhj ubah}(h]h ]h"]h$]h&]uh1j$hj 
ubj%)}(hhh]h)}(h dma2_tcf7h]h dma2_tcf7}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKkhj ubah}(h]h ]h"]h$]h&]uh1j$hj ubj%)}(hhh]h)}(h*0x0F*h]j)}(hj4 h]h0x0F}(hj6 hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj2 ubah}(h]h ]h"]h$]h&]uh1hhhhKkhj/ ubah}(h]h ]h"]h$]h&]uh1j$hj ubeh}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]colsKuh1jhjubeh}(h]id1ah ]h"]h$]h&]uh1jhjubh)}(hXSTM32 DMA-MDMA chaining feature then uses a SRAM buffer. STM32MP1 SoCs embed three fast access static internal RAMs of various size, used for data storage. Due to STM32 DMA legacy (within microcontrollers), STM32 DMA performances are bad with DDR, while they are optimal with SRAM. Hence the SRAM buffer used between STM32 DMA and STM32 MDMA. This buffer is split in two equal periods and STM32 DMA uses one period while STM32 MDMA uses the other period simultaneously. ::h]hXSTM32 DMA-MDMA chaining feature then uses a SRAM buffer. STM32MP1 SoCs embed three fast access static internal RAMs of various size, used for data storage. Due to STM32 DMA legacy (within microcontrollers), STM32 DMA performances are bad with DDR, while they are optimal with SRAM. Hence the SRAM buffer used between STM32 DMA and STM32 MDMA. This buffer is split in two equal periods and STM32 DMA uses one period while STM32 MDMA uses the other period simultaneously.}(hji hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKnhjubh literal_block)}(hXb dma[1:2]-tcf[0:7] .----------------. ____________ ' _________ V____________ | STM32 DMA | / __|>_ \ | STM32 MDMA | |------------| | / \ | |------------| | DMA_SxM0AR |<=>| | SRAM | |<=>| []-[]...[] | | DMA_SxM1AR | | \_____/ | | | |____________| \___<|____/ |____________|h]hXb dma[1:2]-tcf[0:7] .----------------. 
____________ ' _________ V____________ | STM32 DMA | / __|>_ \ | STM32 MDMA | |------------| | / \ | |------------| | DMA_SxM0AR |<=>| | SRAM | |<=>| []-[]...[] | | DMA_SxM1AR | | \_____/ | | | |____________| \___<|____/ |____________|}hjy sbah}(h]h ]h"]h$]h&]hhuh1jw hhhKwhjubh)}(hSTM32 DMA-MDMA chaining uses (struct dma_slave_config).peripheral_config to exchange the parameters needed to configure MDMA. These parameters are gathered into a u32 array with three values:h]hSTM32 DMA-MDMA chaining uses (struct dma_slave_config).peripheral_config to exchange the parameters needed to configure MDMA. These parameters are gathered into a u32 array with three values:}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhjubh bullet_list)}(hhh](h list_item)}(hAthe STM32 MDMA request (which is actually the DMAMUX channel ID),h]h)}(hj h]hAthe STM32 MDMA request (which is actually the DMAMUX channel ID),}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj ubah}(h]h ]h"]h$]h&]uh1j hj ubj )}(hTthe address of the STM32 DMA register to clear the Transfer Complete interrupt flag,h]h)}(hTthe address of the STM32 DMA register to clear the Transfer Complete interrupt flag,h]hTthe address of the STM32 DMA register to clear the Transfer Complete interrupt flag,}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj ubah}(h]h ]h"]h$]h&]uh1j hj ubj )}(hKthe mask of the Transfer Complete interrupt flag of the STM32 DMA channel. h]h)}(hJthe mask of the Transfer Complete interrupt flag of the STM32 DMA channel.h]hJthe mask of the Transfer Complete interrupt flag of the STM32 DMA channel.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj ubah}(h]h ]h"]h$]h&]uh1j hj ubeh}(h]h ]h"]h$]h&]bullet*uh1j hhhKhjubeh}(h]h ]h"]h$]h&]uh1hhhhK.hjhhubeh}(h] principlesah ]h"] principlesah$]h&]uh1hhhhhhhhK,ubh)}(hhh](h)}(h7Device Tree updates for STM32 DMA-MDMA chaining supporth]h7Device Tree updates for STM32 DMA-MDMA chaining support}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj hhhhhKubh)}(hX[ **1. 
Allocate a SRAM buffer** SRAM device tree node is defined in SoC device tree. You can refer to it in your board device tree to define your SRAM pool. :: &sram { my_foo_device_dma_pool: dma-sram@0 { reg = <0x0 0x1000>; }; }; Be careful of the start index, in case there are other SRAM consumers. Define your pool size strategically: to optimise chaining, the idea is that STM32 DMA and STM32 MDMA can work simultaneously, on each buffer of the SRAM. If the SRAM period is greater than the expected DMA transfer, then STM32 DMA and STM32 MDMA will work sequentially instead of simultaneously. It is not a functional issue but it is not optimal. Don't forget to refer to your SRAM pool in your device node. You need to define a new property. :: &my_foo_device { ... my_dma_pool = &my_foo_device_dma_pool; }; Then get this SRAM pool in your foo driver and allocate your SRAM buffer. **2. Allocate a STM32 DMA channel and a STM32 MDMA channel** You need to define an extra channel in your device tree node, in addition to the one you should already have for "classic" DMA operation. This new channel must be taken from STM32 MDMA channels, so, the phandle of the DMA controller to use is the MDMA controller's one. :: &my_foo_device { [...] my_dma_pool = &my_foo_device_dma_pool; dmas = <&dmamux1 ...>, // STM32 DMA channel <&mdma1 0 0x3 0x1200000a 0 0>; // + STM32 MDMA channel }; Concerning STM32 MDMA bindings: 1. The request line number : whatever the value here, it will be overwritten by MDMA driver with the STM32 DMAMUX channel ID passed through (struct dma_slave_config).peripheral_config 2. The priority level : choose Very High (0x3) so that your channel will take priority other the other during request arbitration 3. A 32bit mask specifying the DMA channel configuration : source and destination address increment, block transfer with 128 bytes per single transfer 4. 
The 32bit value specifying the register to be used to acknowledge the request: it will be overwritten by MDMA driver, with the DMA channel interrupt flag clear register address passed through (struct dma_slave_config).peripheral_config 5. The 32bit mask specifying the value to be written to acknowledge the request: it will be overwritten by MDMA driver, with the DMA channel Transfer Complete flag passed through (struct dma_slave_config).peripheral_config h](h)}(h**1. Allocate a SRAM buffer**h]j)}(hj h]h1. Allocate a SRAM buffer}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hhhhKhj ubh)}(hXSRAM device tree node is defined in SoC device tree. You can refer to it in your board device tree to define your SRAM pool. :: &sram { my_foo_device_dma_pool: dma-sram@0 { reg = <0x0 0x1000>; }; }; Be careful of the start index, in case there are other SRAM consumers. Define your pool size strategically: to optimise chaining, the idea is that STM32 DMA and STM32 MDMA can work simultaneously, on each buffer of the SRAM. If the SRAM period is greater than the expected DMA transfer, then STM32 DMA and STM32 MDMA will work sequentially instead of simultaneously. It is not a functional issue but it is not optimal. Don't forget to refer to your SRAM pool in your device node. You need to define a new property. :: &my_foo_device { ... my_dma_pool = &my_foo_device_dma_pool; }; Then get this SRAM pool in your foo driver and allocate your SRAM buffer. h](h)}(hSRAM device tree node is defined in SoC device tree. You can refer to it in your board device tree to define your SRAM pool. ::h]h|SRAM device tree node is defined in SoC device tree. 
You can refer to it in your board device tree to define your SRAM pool.}(hj) hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj% ubjx )}(hf&sram { my_foo_device_dma_pool: dma-sram@0 { reg = <0x0 0x1000>; }; };h]hf&sram { my_foo_device_dma_pool: dma-sram@0 { reg = <0x0 0x1000>; }; };}hj7 sbah}(h]h ]h"]h$]h&]hhuh1jw hhhKhj% ubh)}(hXBe careful of the start index, in case there are other SRAM consumers. Define your pool size strategically: to optimise chaining, the idea is that STM32 DMA and STM32 MDMA can work simultaneously, on each buffer of the SRAM. If the SRAM period is greater than the expected DMA transfer, then STM32 DMA and STM32 MDMA will work sequentially instead of simultaneously. It is not a functional issue but it is not optimal.h]hXBe careful of the start index, in case there are other SRAM consumers. Define your pool size strategically: to optimise chaining, the idea is that STM32 DMA and STM32 MDMA can work simultaneously, on each buffer of the SRAM. If the SRAM period is greater than the expected DMA transfer, then STM32 DMA and STM32 MDMA will work sequentially instead of simultaneously. It is not a functional issue but it is not optimal.}(hjE hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj% ubh)}(hbDon't forget to refer to your SRAM pool in your device node. You need to define a new property. ::h]haDon’t forget to refer to your SRAM pool in your device node. You need to define a new property.}(hjS hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj% ubjx )}(hN&my_foo_device { ... my_dma_pool = &my_foo_device_dma_pool; };h]hN&my_foo_device { ... my_dma_pool = &my_foo_device_dma_pool; };}hja sbah}(h]h ]h"]h$]h&]hhuh1jw hhhKhj% ubh)}(hIThen get this SRAM pool in your foo driver and allocate your SRAM buffer.h]hIThen get this SRAM pool in your foo driver and allocate your SRAM buffer.}(hjo hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj% ubeh}(h]h ]h"]h$]h&]uh1hhhhKhj ubh)}(h<**2. Allocate a STM32 DMA channel and a STM32 MDMA channel**h]j)}(hj h]h82. 
Allocate a STM32 DMA channel and a STM32 MDMA channel}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hhhhKhj ubh)}(hXYou need to define an extra channel in your device tree node, in addition to the one you should already have for "classic" DMA operation. This new channel must be taken from STM32 MDMA channels, so, the phandle of the DMA controller to use is the MDMA controller's one. :: &my_foo_device { [...] my_dma_pool = &my_foo_device_dma_pool; dmas = <&dmamux1 ...>, // STM32 DMA channel <&mdma1 0 0x3 0x1200000a 0 0>; // + STM32 MDMA channel }; Concerning STM32 MDMA bindings: 1. The request line number : whatever the value here, it will be overwritten by MDMA driver with the STM32 DMAMUX channel ID passed through (struct dma_slave_config).peripheral_config 2. The priority level : choose Very High (0x3) so that your channel will take priority other the other during request arbitration 3. A 32bit mask specifying the DMA channel configuration : source and destination address increment, block transfer with 128 bytes per single transfer 4. The 32bit value specifying the register to be used to acknowledge the request: it will be overwritten by MDMA driver, with the DMA channel interrupt flag clear register address passed through (struct dma_slave_config).peripheral_config 5. The 32bit mask specifying the value to be written to acknowledge the request: it will be overwritten by MDMA driver, with the DMA channel Transfer Complete flag passed through (struct dma_slave_config).peripheral_config h](h)}(hYou need to define an extra channel in your device tree node, in addition to the one you should already have for "classic" DMA operation.h]hYou need to define an extra channel in your device tree node, in addition to the one you should already have for “classic” DMA operation.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj ubh)}(hThis new channel must be taken from STM32 MDMA channels, so, the phandle of the DMA controller to use is the MDMA controller's one. 
::h]hThis new channel must be taken from STM32 MDMA channels, so, the phandle of the DMA controller to use is the MDMA controller’s one.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj ubjx )}(h&my_foo_device { [...] my_dma_pool = &my_foo_device_dma_pool; dmas = <&dmamux1 ...>, // STM32 DMA channel <&mdma1 0 0x3 0x1200000a 0 0>; // + STM32 MDMA channel };h]h&my_foo_device { [...] my_dma_pool = &my_foo_device_dma_pool; dmas = <&dmamux1 ...>, // STM32 DMA channel <&mdma1 0 0x3 0x1200000a 0 0>; // + STM32 MDMA channel };}hj sbah}(h]h ]h"]h$]h&]hhuh1jw hhhKhj ubh)}(hConcerning STM32 MDMA bindings:h]hConcerning STM32 MDMA bindings:}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj ubh)}(h1. The request line number: whatever the value here, it will be overwritten by MDMA driver with the STM32 DMAMUX channel ID passed through (struct dma_slave_config).peripheral_configh]h1. The request line number: whatever the value here, it will be overwritten by MDMA driver with the STM32 DMAMUX channel ID passed through (struct dma_slave_config).peripheral_config}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj ubh)}(h2. The priority level: choose Very High (0x3) so that your channel will take priority over the other during request arbitrationh]h2. The priority level: choose Very High (0x3) so that your channel will take priority over the other during request arbitration}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj ubh)}(h3. A 32bit mask specifying the DMA channel configuration: source and destination address increment, block transfer with 128 bytes per single transferh]h3. A 32bit mask specifying the DMA channel configuration: source and destination address increment, block transfer with 128 bytes per single transfer}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj ubh)}(h4.
The 32bit value specifying the register to be used to acknowledge the request: it will be overwritten by MDMA driver, with the DMA channel interrupt flag clear register address passed through (struct dma_slave_config).peripheral_config}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj ubh)}(h5. The 32bit mask specifying the value to be written to acknowledge the request: it will be overwritten by MDMA driver, with the DMA channel Transfer Complete flag passed through (struct dma_slave_config).peripheral_configh]h5. The 32bit mask specifying the value to be written to acknowledge the request: it will be overwritten by MDMA driver, with the DMA channel Transfer Complete flag passed through (struct dma_slave_config).peripheral_config}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj ubeh}(h]h ]h"]h$]h&]uh1hhhhKhj ubeh}(h]h ]h"]h$]h&]uh1hhhhKhj hhubeh}(h]7device-tree-updates-for-stm32-dma-mdma-chaining-supportah ]h"]7device tree updates for stm32 dma-mdma chaining supportah$]h&]uh1hhhhhhhhKubh)}(hhh](h)}(h@Driver updates for STM32 DMA-MDMA chaining support in foo driverh]h@Driver updates for STM32 DMA-MDMA chaining support in foo driver}(hj3 hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj0 hhhhhKubh)}(hX**0. (optional) Refactor the original sg_table if dmaengine_prep_slave_sg()** In case of dmaengine_prep_slave_sg(), the original sg_table can't be used as is. Two new sg_tables must be created from the original one. One for STM32 DMA transfer (where memory address targets now the SRAM buffer instead of DDR buffer) and one for STM32 MDMA transfer (where memory address targets the DDR buffer). The new sg_list items must fit SRAM period length. Here is an example for DMA_DEV_TO_MEM: :: /* * Assuming sgl and nents, respectively the initial scatterlist and its * length. * Assuming sram_dma_buf and sram_period, respectively the memory * allocated from the pool for DMA usage, and the length of the period, * which is half of the sram_buf size. 
*/ struct sg_table new_dma_sgt, new_mdma_sgt; struct scatterlist *s, *_sgl; dma_addr_t ddr_dma_buf; u32 new_nents = 0, len; int i; /* Count the number of entries needed */ for_each_sg(sgl, s, nents, i) if (sg_dma_len(s) > sram_period) new_nents += DIV_ROUND_UP(sg_dma_len(s), sram_period); else new_nents++; /* Create sg table for STM32 DMA channel */ ret = sg_alloc_table(&new_dma_sgt, new_nents, GFP_ATOMIC); if (ret) dev_err(dev, "DMA sg table alloc failed\n"); for_each_sg(new_dma_sgt.sgl, s, new_dma_sgt.nents, i) { _sgl = sgl; sg_dma_len(s) = min(sg_dma_len(_sgl), sram_period); /* Targets the beginning = first half of the sram_buf */ s->dma_address = sram_buf; /* * Targets the second half of the sram_buf * for odd indexes of the item of the sg_list */ if (i & 1) s->dma_address += sram_period; } /* Create sg table for STM32 MDMA channel */ ret = sg_alloc_table(&new_mdma_sgt, new_nents, GFP_ATOMIC); if (ret) dev_err(dev, "MDMA sg_table alloc failed\n"); _sgl = sgl; len = sg_dma_len(sgl); ddr_dma_buf = sg_dma_address(sgl); for_each_sg(mdma_sgt.sgl, s, mdma_sgt.nents, i) { size_t bytes = min_t(size_t, len, sram_period); sg_dma_len(s) = bytes; sg_dma_address(s) = ddr_dma_buf; len -= bytes; if (!len && sg_next(_sgl)) { _sgl = sg_next(_sgl); len = sg_dma_len(_sgl); ddr_dma_buf = sg_dma_address(_sgl); } else { ddr_dma_buf += bytes; } } Don't forget to release these new sg_tables after getting the descriptors with dmaengine_prep_slave_sg(). **1. Set controller specific parameters** First, use dmaengine_slave_config() with a struct dma_slave_config to configure STM32 DMA channel. You just have to take care of DMA addresses, the memory address (depending on the transfer direction) must point on your SRAM buffer, and set (struct dma_slave_config).peripheral_size != 0. STM32 DMA driver will check (struct dma_slave_config).peripheral_size to determine if chaining is being used or not. 
If it is used, then STM32 DMA driver fills (struct dma_slave_config).peripheral_config with an array of three u32: the first one containing STM32 DMAMUX channel ID, the second one the channel interrupt flag clear register address, and the third one the channel Transfer Complete flag mask. Then, use dmaengine_slave_config() with another struct dma_slave_config to configure STM32 MDMA channel. Take care of DMA addresses, the device address (depending on the transfer direction) must point to your SRAM buffer, and the memory address must point to the buffer originally used for "classic" DMA operation. Use the previous (struct dma_slave_config).peripheral_size and .peripheral_config that have been updated by STM32 DMA driver, to set (struct dma_slave_config).peripheral_size and .peripheral_config of the struct dma_slave_config to configure STM32 MDMA channel. :: struct dma_slave_config dma_conf; struct dma_slave_config mdma_conf; memset(&dma_conf, 0, sizeof(dma_conf)); [...] dma_conf.direction = DMA_DEV_TO_MEM; dma_conf.dst_addr = sram_dma_buf; // SRAM buffer dma_conf.peripheral_size = 1; // peripheral_size != 0 => chaining dmaengine_slave_config(dma_chan, &dma_conf); memset(&mdma_conf, 0, sizeof(mdma_conf)); mdma_conf.direction = DMA_DEV_TO_MEM; mdma_conf.src_addr = sram_dma_buf; // SRAM buffer mdma_conf.dst_addr = rx_dma_buf; // original memory buffer mdma_conf.peripheral_size = dma_conf.peripheral_size; // <- dma_conf mdma_conf.peripheral_config = dma_conf.peripheral_config; // <- dma_conf dmaengine_slave_config(mdma_chan, &mdma_conf); **2. Get a descriptor for STM32 DMA channel transaction** In the same way you get your descriptor for your "classic" DMA operation, you just have to replace the original sg_list (in case of dmaengine_prep_slave_sg()) with the new sg_list using SRAM buffer, or to replace the original buffer address, length and period (in case of dmaengine_prep_dma_cyclic()) with the new SRAM buffer. **3.
Get a descriptor for STM32 MDMA channel transaction** If you previously get descriptor (for STM32 DMA) with * dmaengine_prep_slave_sg(), then use dmaengine_prep_slave_sg() for STM32 MDMA; * dmaengine_prep_dma_cyclic(), then use dmaengine_prep_dma_cyclic() for STM32 MDMA. Use the new sg_list using SRAM buffer (in case of dmaengine_prep_slave_sg()) or, depending on the transfer direction, either the original DDR buffer (in case of DMA_DEV_TO_MEM) or the SRAM buffer (in case of DMA_MEM_TO_DEV), the source address being previously set with dmaengine_slave_config(). **4. Submit both transactions** Before submitting your transactions, you may need to define on which descriptor you want a callback to be called at the end of the transfer (dmaengine_prep_slave_sg()) or the period (dmaengine_prep_dma_cyclic()). Depending on the direction, set the callback on the descriptor that finishes the overall transfer: * DMA_DEV_TO_MEM: set the callback on the "MDMA" descriptor * DMA_MEM_TO_DEV: set the callback on the "DMA" descriptor Then, submit the descriptors whatever the order, with dmaengine_tx_submit(). **5. Issue pending requests (and wait for callback notification)** As STM32 MDMA channel transfer is triggered by STM32 DMA, you must issue STM32 MDMA channel before STM32 DMA channel. If any, your callback will be called to warn you about the end of the overall transfer or the period completion. Don't forget to terminate both channels. STM32 DMA channel is configured in cyclic Double-Buffer mode so it won't be disabled by HW, you need to terminate it. STM32 MDMA channel will be stopped by HW in case of sg transfer, but not in case of cyclic transfer. You can terminate it whatever the kind of transfer. **STM32 DMA-MDMA chaining DMA_MEM_TO_DEV special case** STM32 DMA-MDMA chaining in DMA_MEM_TO_DEV is a special case. Indeed, the STM32 MDMA feeds the SRAM buffer with the DDR data, and the STM32 DMA reads data from SRAM buffer. 
So some data (the first period) have to be copied in SRAM buffer when the STM32 DMA starts to read. A trick could be pausing the STM32 DMA channel (that will raise a Transfer Complete signal, triggering the STM32 MDMA channel), but the first data read by the STM32 DMA could be "wrong". The proper way is to prepare the first SRAM period with dmaengine_prep_dma_memcpy(). Then this first period should be "removed" from the sg or the cyclic transfer. Due to this complexity, rather use the STM32 DMA-MDMA chaining for DMA_DEV_TO_MEM and keep the "classic" DMA usage for DMA_MEM_TO_DEV, unless you're not afraid. h](h)}(hM**0. (optional) Refactor the original sg_table if dmaengine_prep_slave_sg()**h]j)}(hjG h]hI0. (optional) Refactor the original sg_table if dmaengine_prep_slave_sg()}(hjI hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjE ubah}(h]h ]h"]h$]h&]uh1hhhhKhjA ubh)}(hX In case of dmaengine_prep_slave_sg(), the original sg_table can't be used as is. Two new sg_tables must be created from the original one. One for STM32 DMA transfer (where memory address targets now the SRAM buffer instead of DDR buffer) and one for STM32 MDMA transfer (where memory address targets the DDR buffer). The new sg_list items must fit SRAM period length. Here is an example for DMA_DEV_TO_MEM: :: /* * Assuming sgl and nents, respectively the initial scatterlist and its * length. * Assuming sram_dma_buf and sram_period, respectively the memory * allocated from the pool for DMA usage, and the length of the period, * which is half of the sram_buf size. 
*/ struct sg_table new_dma_sgt, new_mdma_sgt; struct scatterlist *s, *_sgl; dma_addr_t ddr_dma_buf; u32 new_nents = 0, len; int i; /* Count the number of entries needed */ for_each_sg(sgl, s, nents, i) if (sg_dma_len(s) > sram_period) new_nents += DIV_ROUND_UP(sg_dma_len(s), sram_period); else new_nents++; /* Create sg table for STM32 DMA channel */ ret = sg_alloc_table(&new_dma_sgt, new_nents, GFP_ATOMIC); if (ret) dev_err(dev, "DMA sg table alloc failed\n"); for_each_sg(new_dma_sgt.sgl, s, new_dma_sgt.nents, i) { _sgl = sgl; sg_dma_len(s) = min(sg_dma_len(_sgl), sram_period); /* Targets the beginning = first half of the sram_buf */ s->dma_address = sram_buf; /* * Targets the second half of the sram_buf * for odd indexes of the item of the sg_list */ if (i & 1) s->dma_address += sram_period; } /* Create sg table for STM32 MDMA channel */ ret = sg_alloc_table(&new_mdma_sgt, new_nents, GFP_ATOMIC); if (ret) dev_err(dev, "MDMA sg_table alloc failed\n"); _sgl = sgl; len = sg_dma_len(sgl); ddr_dma_buf = sg_dma_address(sgl); for_each_sg(mdma_sgt.sgl, s, mdma_sgt.nents, i) { size_t bytes = min_t(size_t, len, sram_period); sg_dma_len(s) = bytes; sg_dma_address(s) = ddr_dma_buf; len -= bytes; if (!len && sg_next(_sgl)) { _sgl = sg_next(_sgl); len = sg_dma_len(_sgl); ddr_dma_buf = sg_dma_address(_sgl); } else { ddr_dma_buf += bytes; } } Don't forget to release these new sg_tables after getting the descriptors with dmaengine_prep_slave_sg(). h](h)}(hX<In case of dmaengine_prep_slave_sg(), the original sg_table can't be used as is. Two new sg_tables must be created from the original one. One for STM32 DMA transfer (where memory address targets now the SRAM buffer instead of DDR buffer) and one for STM32 MDMA transfer (where memory address targets the DDR buffer).h]hX>In case of dmaengine_prep_slave_sg(), the original sg_table can’t be used as is. Two new sg_tables must be created from the original one. 
One for STM32 DMA transfer (where memory address targets now the SRAM buffer instead of DDR buffer) and one for STM32 MDMA transfer (where memory address targets the DDR buffer).}(hj` hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj\ ubh)}(h\The new sg_list items must fit SRAM period length. Here is an example for DMA_DEV_TO_MEM: ::h]hYThe new sg_list items must fit SRAM period length. Here is an example for DMA_DEV_TO_MEM:}(hjn hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhKhj\ ubjx )}(hXp/* * Assuming sgl and nents, respectively the initial scatterlist and its * length. * Assuming sram_dma_buf and sram_period, respectively the memory * allocated from the pool for DMA usage, and the length of the period, * which is half of the sram_buf size. */ struct sg_table new_dma_sgt, new_mdma_sgt; struct scatterlist *s, *_sgl; dma_addr_t ddr_dma_buf; u32 new_nents = 0, len; int i; /* Count the number of entries needed */ for_each_sg(sgl, s, nents, i) if (sg_dma_len(s) > sram_period) new_nents += DIV_ROUND_UP(sg_dma_len(s), sram_period); else new_nents++; /* Create sg table for STM32 DMA channel */ ret = sg_alloc_table(&new_dma_sgt, new_nents, GFP_ATOMIC); if (ret) dev_err(dev, "DMA sg table alloc failed\n"); for_each_sg(new_dma_sgt.sgl, s, new_dma_sgt.nents, i) { _sgl = sgl; sg_dma_len(s) = min(sg_dma_len(_sgl), sram_period); /* Targets the beginning = first half of the sram_buf */ s->dma_address = sram_buf; /* * Targets the second half of the sram_buf * for odd indexes of the item of the sg_list */ if (i & 1) s->dma_address += sram_period; } /* Create sg table for STM32 MDMA channel */ ret = sg_alloc_table(&new_mdma_sgt, new_nents, GFP_ATOMIC); if (ret) dev_err(dev, "MDMA sg_table alloc failed\n"); _sgl = sgl; len = sg_dma_len(sgl); ddr_dma_buf = sg_dma_address(sgl); for_each_sg(mdma_sgt.sgl, s, mdma_sgt.nents, i) { size_t bytes = min_t(size_t, len, sram_period); sg_dma_len(s) = bytes; sg_dma_address(s) = ddr_dma_buf; len -= bytes; if (!len && sg_next(_sgl)) { _sgl = sg_next(_sgl); len = 
sg_dma_len(_sgl); ddr_dma_buf = sg_dma_address(_sgl); } else { ddr_dma_buf += bytes; } }h]hXp/* * Assuming sgl and nents, respectively the initial scatterlist and its * length. * Assuming sram_dma_buf and sram_period, respectively the memory * allocated from the pool for DMA usage, and the length of the period, * which is half of the sram_buf size. */ struct sg_table new_dma_sgt, new_mdma_sgt; struct scatterlist *s, *_sgl; dma_addr_t ddr_dma_buf; u32 new_nents = 0, len; int i; /* Count the number of entries needed */ for_each_sg(sgl, s, nents, i) if (sg_dma_len(s) > sram_period) new_nents += DIV_ROUND_UP(sg_dma_len(s), sram_period); else new_nents++; /* Create sg table for STM32 DMA channel */ ret = sg_alloc_table(&new_dma_sgt, new_nents, GFP_ATOMIC); if (ret) dev_err(dev, "DMA sg table alloc failed\n"); for_each_sg(new_dma_sgt.sgl, s, new_dma_sgt.nents, i) { _sgl = sgl; sg_dma_len(s) = min(sg_dma_len(_sgl), sram_period); /* Targets the beginning = first half of the sram_buf */ s->dma_address = sram_buf; /* * Targets the second half of the sram_buf * for odd indexes of the item of the sg_list */ if (i & 1) s->dma_address += sram_period; } /* Create sg table for STM32 MDMA channel */ ret = sg_alloc_table(&new_mdma_sgt, new_nents, GFP_ATOMIC); if (ret) dev_err(dev, "MDMA sg_table alloc failed\n"); _sgl = sgl; len = sg_dma_len(sgl); ddr_dma_buf = sg_dma_address(sgl); for_each_sg(mdma_sgt.sgl, s, mdma_sgt.nents, i) { size_t bytes = min_t(size_t, len, sram_period); sg_dma_len(s) = bytes; sg_dma_address(s) = ddr_dma_buf; len -= bytes; if (!len && sg_next(_sgl)) { _sgl = sg_next(_sgl); len = sg_dma_len(_sgl); ddr_dma_buf = sg_dma_address(_sgl); } else { ddr_dma_buf += bytes; } }}hj| sbah}(h]h ]h"]h$]h&]hhuh1jw hhhKhj\ ubh)}(hiDon't forget to release these new sg_tables after getting the descriptors with dmaengine_prep_slave_sg().h]hkDon’t forget to release these new sg_tables after getting the descriptors with dmaengine_prep_slave_sg().}(hj hhhNhNubah}(h]h 
]h"]h$]h&]uh1hhhhMhj\ ubeh}(h]h ]h"]h$]h&]uh1hhhhKhjA ubh)}(h)**1. Set controller specific parameters**h]j)}(hj h]h%1. Set controller specific parameters}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hhhhM"hjA ubh)}(hXFirst, use dmaengine_slave_config() with a struct dma_slave_config to configure STM32 DMA channel. You just have to take care of DMA addresses, the memory address (depending on the transfer direction) must point to your SRAM buffer, and set (struct dma_slave_config).peripheral_size != 0. STM32 DMA driver will check (struct dma_slave_config).peripheral_size to determine if chaining is being used or not. If it is used, then STM32 DMA driver fills (struct dma_slave_config).peripheral_config with an array of three u32: the first one containing STM32 DMAMUX channel ID, the second one the channel interrupt flag clear register address, and the third one the channel Transfer Complete flag mask. Then, use dmaengine_slave_config() with another struct dma_slave_config to configure STM32 MDMA channel. Take care of DMA addresses, the device address (depending on the transfer direction) must point to your SRAM buffer, and the memory address must point to the buffer originally used for "classic" DMA operation. Use the previous (struct dma_slave_config).peripheral_size and .peripheral_config that have been updated by STM32 DMA driver, to set (struct dma_slave_config).peripheral_size and .peripheral_config of the struct dma_slave_config to configure STM32 MDMA channel. :: struct dma_slave_config dma_conf; struct dma_slave_config mdma_conf; memset(&dma_conf, 0, sizeof(dma_conf)); [...]
dma_conf.direction = DMA_DEV_TO_MEM; dma_conf.dst_addr = sram_dma_buf; // SRAM buffer dma_conf.peripheral_size = 1; // peripheral_size != 0 => chaining dmaengine_slave_config(dma_chan, &dma_conf); memset(&mdma_conf, 0, sizeof(mdma_conf)); mdma_conf.direction = DMA_DEV_TO_MEM; mdma_conf.src_addr = sram_dma_buf; // SRAM buffer mdma_conf.dst_addr = rx_dma_buf; // original memory buffer mdma_conf.peripheral_size = dma_conf.peripheral_size; // <- dma_conf mdma_conf.peripheral_config = dma_conf.peripheral_config; // <- dma_conf dmaengine_slave_config(mdma_chan, &mdma_conf); NTh](h)}(hX First, use dmaengine_slave_config() with a struct dma_slave_config to configure STM32 DMA channel. You just have to take care of DMA addresses, the memory address (depending on the transfer direction) must point to your SRAM buffer, and set (struct dma_slave_config).peripheral_size != 0.h]hX First, use dmaengine_slave_config() with a struct dma_slave_config to configure STM32 DMA channel. You just have to take care of DMA addresses, the memory address (depending on the transfer direction) must point to your SRAM buffer, and set (struct dma_slave_config).peripheral_size != 0.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM$hj ubh)}(hXSTM32 DMA driver will check (struct dma_slave_config).peripheral_size to determine if chaining is being used or not.
If it is used, then STM32 DMA driver fills (struct dma_slave_config).peripheral_config with an array of three u32: the first one containing STM32 DMAMUX channel ID, the second one the channel interrupt flag clear register address, and the third one the channel Transfer Complete flag mask.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM)hj ubh)}(hXAThen, use dmaengine_slave_config() with another struct dma_slave_config to configure STM32 MDMA channel. Take care of DMA addresses, the device address (depending on the transfer direction) must point to your SRAM buffer, and the memory address must point to the buffer originally used for "classic" DMA operation. Use the previous (struct dma_slave_config).peripheral_size and .peripheral_config that have been updated by STM32 DMA driver, to set (struct dma_slave_config).peripheral_size and .peripheral_config of the struct dma_slave_config to configure STM32 MDMA channel. ::h]hXBThen, use dmaengine_slave_config() with another struct dma_slave_config to configure STM32 MDMA channel. Take care of DMA addresses, the device address (depending on the transfer direction) must point to your SRAM buffer, and the memory address must point to the buffer originally used for “classic” DMA operation. Use the previous (struct dma_slave_config).peripheral_size and .peripheral_config that have been updated by STM32 DMA driver, to set (struct dma_slave_config).peripheral_size and .peripheral_config of the struct dma_slave_config to configure STM32 MDMA channel.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM0hj ubjx )}(hXstruct dma_slave_config dma_conf; struct dma_slave_config mdma_conf; memset(&dma_conf, 0, sizeof(dma_conf)); [...]
dma_conf.direction = DMA_DEV_TO_MEM; dma_conf.dst_addr = sram_dma_buf; // SRAM buffer dma_conf.peripheral_size = 1; // peripheral_size != 0 => chaining dmaengine_slave_config(dma_chan, &dma_conf); memset(&mdma_conf, 0, sizeof(mdma_conf)); mdma_conf.direction = DMA_DEV_TO_MEM; mdma_conf.src_addr = sram_dma_buf; // SRAM buffer mdma_conf.dst_addr = rx_dma_buf; // original memory buffer mdma_conf.peripheral_size = dma_conf.peripheral_size; // <- dma_conf mdma_conf.peripheral_config = dma_conf.peripheral_config; // <- dma_conf dmaengine_slave_config(mdma_chan, &mdma_conf);h]hXstruct dma_slave_config dma_conf; struct dma_slave_config mdma_conf; memset(&dma_conf, 0, sizeof(dma_conf)); [...] dma_conf.direction = DMA_DEV_TO_MEM; dma_conf.dst_addr = sram_dma_buf; // SRAM buffer dma_conf.peripheral_size = 1; // peripheral_size != 0 => chaining dmaengine_slave_config(dma_chan, &dma_conf); memset(&mdma_conf, 0, sizeof(mdma_conf)); mdma_conf.direction = DMA_DEV_TO_MEM; mdma_conf.src_addr = sram_dma_buf; // SRAM buffer mdma_conf.dst_addr = rx_dma_buf; // original memory buffer mdma_conf.peripheral_size = dma_conf.peripheral_size; // <- dma_conf mdma_conf.peripheral_config = dma_conf.peripheral_config; // <- dma_conf dmaengine_slave_config(mdma_chan, &mdma_conf);}hj sbah}(h]h ]h"]h$]h&]hhuh1jw hhhM:hj ubeh}(h]h ]h"]h$]h&]uh1hhhhM$hjA ubh)}(h9**2. Get a descriptor for STM32 DMA channel transaction**h]j)}(hj h]h52. Get a descriptor for STM32 DMA channel transaction}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hhhhMNhjA ubh)}(hXGIn the same way you get your descriptor for your "classic" DMA operation, you just have to replace the original sg_list (in case of dmaengine_prep_slave_sg()) with the new sg_list using SRAM buffer, or to replace the original buffer address, length and period (in case of dmaengine_prep_dma_cyclic()) with the new SRAM buffer.
h]h)}(hXFIn the same way you get your descriptor for your "classic" DMA operation, you just have to replace the original sg_list (in case of dmaengine_prep_slave_sg()) with the new sg_list using SRAM buffer, or to replace the original buffer address, length and period (in case of dmaengine_prep_dma_cyclic()) with the new SRAM buffer.h]hXJIn the same way you get your descriptor for your “classic” DMA operation, you just have to replace the original sg_list (in case of dmaengine_prep_slave_sg()) with the new sg_list using SRAM buffer, or to replace the original buffer address, length and period (in case of dmaengine_prep_dma_cyclic()) with the new SRAM buffer.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMPhj ubah}(h]h ]h"]h$]h&]uh1hhhhMPhjA ubh)}(h:**3. Get a descriptor for STM32 MDMA channel transaction**h]j)}(hj( h]h63. Get a descriptor for STM32 MDMA channel transaction}(hj* hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj& ubah}(h]h ]h"]h$]h&]uh1hhhhMVhjA ubh)}(hXIf you previously get descriptor (for STM32 DMA) with * dmaengine_prep_slave_sg(), then use dmaengine_prep_slave_sg() for STM32 MDMA; * dmaengine_prep_dma_cyclic(), then use dmaengine_prep_dma_cyclic() for STM32 MDMA. Use the new sg_list using SRAM buffer (in case of dmaengine_prep_slave_sg()) or, depending on the transfer direction, either the original DDR buffer (in case of DMA_DEV_TO_MEM) or the SRAM buffer (in case of DMA_MEM_TO_DEV), the source address being previously set with dmaengine_slave_config(). 
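Putting the descriptor, submission and issue steps together for a cyclic DMA_DEV_TO_MEM transfer, the sequence can be sketched with the dmaengine client API. This is a sketch under assumptions: dmaengine_submit() is the in-tree submission helper, and sram_dma_buf, sram_period, rx_dma_buf, rx_buf_len, foo_dev and foo_dma_callback() are illustrative names:

```c
struct dma_async_tx_descriptor *dma_desc, *mdma_desc;

/* STM32 DMA: device -> SRAM buffer, ping-pong over the two periods */
dma_desc = dmaengine_prep_dma_cyclic(dma_chan, sram_dma_buf,
				     2 * sram_period, sram_period,
				     DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT);

/* STM32 MDMA: SRAM buffer -> original DDR buffer */
mdma_desc = dmaengine_prep_dma_cyclic(mdma_chan, rx_dma_buf,
				      rx_buf_len, sram_period,
				      DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT);

/* DMA_DEV_TO_MEM: the MDMA descriptor finishes the overall transfer,
 * so the callback goes on it. */
mdma_desc->callback = foo_dma_callback;
mdma_desc->callback_param = foo_dev;

/* Submission order does not matter... */
dmaengine_submit(dma_desc);
dmaengine_submit(mdma_desc);

/* ...but issue the MDMA channel first: it is triggered by the
 * STM32 DMA channel. */
dma_async_issue_pending(mdma_chan);
dma_async_issue_pending(dma_chan);

/* On stop, terminate both channels. */
dmaengine_terminate_sync(dma_chan);
dmaengine_terminate_sync(mdma_chan);
```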
h](h)}(h5If you previously get descriptor (for STM32 DMA) withh]h5If you previously get descriptor (for STM32 DMA) with}(hjA hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMXhj= ubj )}(hhh](j )}(hMdmaengine_prep_slave_sg(), then use dmaengine_prep_slave_sg() for STM32 MDMA;h]h)}(hMdmaengine_prep_slave_sg(), then use dmaengine_prep_slave_sg() for STM32 MDMA;h]hMdmaengine_prep_slave_sg(), then use dmaengine_prep_slave_sg() for STM32 MDMA;}(hjV hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMZhjR ubah}(h]h ]h"]h$]h&]uh1j hjO ubj )}(hRdmaengine_prep_dma_cyclic(), then use dmaengine_prep_dma_cyclic() for STM32 MDMA. h]h)}(hQdmaengine_prep_dma_cyclic(), then use dmaengine_prep_dma_cyclic() for STM32 MDMA.h]hQdmaengine_prep_dma_cyclic(), then use dmaengine_prep_dma_cyclic() for STM32 MDMA.}(hjn hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM\hjj ubah}(h]h ]h"]h$]h&]uh1j hjO ubeh}(h]h ]h"]h$]h&]j j uh1j hhhMZhj= ubh)}(hX'Use the new sg_list using SRAM buffer (in case of dmaengine_prep_slave_sg()) or, depending on the transfer direction, either the original DDR buffer (in case of DMA_DEV_TO_MEM) or the SRAM buffer (in case of DMA_MEM_TO_DEV), the source address being previously set with dmaengine_slave_config().h]hX'Use the new sg_list using SRAM buffer (in case of dmaengine_prep_slave_sg()) or, depending on the transfer direction, either the original DDR buffer (in case of DMA_DEV_TO_MEM) or the SRAM buffer (in case of DMA_MEM_TO_DEV), the source address being previously set with dmaengine_slave_config().}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM_hj= ubeh}(h]h ]h"]h$]h&]uh1hhhhMXhjA ubh)}(h**4. Submit both transactions**h]j)}(hj h]h4. Submit both transactions}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hhhhMdhjA ubh)}(hXBefore submitting your transactions, you may need to define on which descriptor you want a callback to be called at the end of the transfer (dmaengine_prep_slave_sg()) or the period (dmaengine_prep_dma_cyclic()). 
Depending on the direction, set the callback on the descriptor that finishes the overall transfer: * DMA_DEV_TO_MEM: set the callback on the "MDMA" descriptor * DMA_MEM_TO_DEV: set the callback on the "DMA" descriptor Then, submit the descriptors whatever the order, with dmaengine_tx_submit(). h](h)}(hX7Before submitting your transactions, you may need to define on which descriptor you want a callback to be called at the end of the transfer (dmaengine_prep_slave_sg()) or the period (dmaengine_prep_dma_cyclic()). Depending on the direction, set the callback on the descriptor that finishes the overall transfer:h]hX7Before submitting your transactions, you may need to define on which descriptor you want a callback to be called at the end of the transfer (dmaengine_prep_slave_sg()) or the period (dmaengine_prep_dma_cyclic()). Depending on the direction, set the callback on the descriptor that finishes the overall transfer:}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMfhj ubj )}(hhh](j )}(h9DMA_DEV_TO_MEM: set the callback on the "MDMA" descriptorh]h)}(hj h]h=DMA_DEV_TO_MEM: set the callback on the “MDMA” descriptor}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMlhj ubah}(h]h ]h"]h$]h&]uh1j hj ubj )}(h9DMA_MEM_TO_DEV: set the callback on the "DMA" descriptor h]h)}(h8DMA_MEM_TO_DEV: set the callback on the "DMA" descriptorh]h5. 
Issue pending requests (and wait for callback notification)}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hhhhMqhjA ubh)}(huAs STM32 MDMA channel transfer is triggered by STM32 DMA, you must issue STM32 MDMA channel before STM32 DMA channel.h]huAs STM32 MDMA channel transfer is triggered by STM32 DMA, you must issue STM32 MDMA channel before STM32 DMA channel.}(hj(hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMshjA ubh)}(hpIf any, your callback will be called to warn you about the end of the overall transfer or the period completion.h]hpIf any, your callback will be called to warn you about the end of the overall transfer or the period completion.}(hj6hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMvhjA ubh)}(hX7Don't forget to terminate both channels. STM32 DMA channel is configured in cyclic Double-Buffer mode so it won't be disabled by HW, you need to terminate it. STM32 MDMA channel will be stopped by HW in case of sg transfer, but not in case of cyclic transfer. You can terminate it whatever the kind of transfer.h]hX;Don’t forget to terminate both channels. STM32 DMA channel is configured in cyclic Double-Buffer mode so it won’t be disabled by HW, you need to terminate it. STM32 MDMA channel will be stopped by HW in case of sg transfer, but not in case of cyclic transfer. You can terminate it whatever the kind of transfer.}(hjDhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMyhjA ubh)}(h7**STM32 DMA-MDMA chaining DMA_MEM_TO_DEV special case**h]j)}(hjTh]h3STM32 DMA-MDMA chaining DMA_MEM_TO_DEV special case}(hjVhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjRubah}(h]h ]h"]h$]h&]uh1hhhhM~hjA ubh)}(hXSTM32 DMA-MDMA chaining in DMA_MEM_TO_DEV is a special case. Indeed, the STM32 MDMA feeds the SRAM buffer with the DDR data, and the STM32 DMA reads data from SRAM buffer. So some data (the first period) have to be copied in SRAM buffer when the STM32 DMA starts to read.h]hXSTM32 DMA-MDMA chaining in DMA_MEM_TO_DEV is a special case. 
A trick could be pausing the STM32 DMA channel (which will raise a Transfer
Complete signal, triggering the STM32 MDMA channel), but the first data read
by the STM32 DMA could be "wrong". The proper way is to prepare the first
SRAM period with dmaengine_prep_dma_memcpy(). This first period should then
be "removed" from the sg or the cyclic transfer.

Due to this complexity, prefer using STM32 DMA-MDMA chaining for
DMA_DEV_TO_MEM and keep the "classic" DMA usage for DMA_MEM_TO_DEV, unless
you're not afraid.
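A sketch of the dmaengine_prep_dma_memcpy() preparation of the first SRAM
period described above, with hypothetical handles (``mdma_chan``,
``sram_dma_addr``, ``ddr_dma_addr``, ``period_len`` are assumptions, not
names from this document)::

    struct dma_async_tx_descriptor *memcpy_desc;

    /* Pre-fill the first SRAM period from DDR so the STM32 DMA reads
     * valid data as soon as it starts; this period is then "removed"
     * from the sg or cyclic transfer.
     */
    memcpy_desc = dmaengine_prep_dma_memcpy(mdma_chan, sram_dma_addr,
                                            ddr_dma_addr, period_len,
                                            DMA_CTRL_ACK | DMA_PREP_INTERRUPT);
    if (!memcpy_desc)
            return -EINVAL;

    dmaengine_tx_submit(memcpy_desc);
    dma_async_issue_pending(mdma_chan);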
Resources
---------

Application note, datasheet and reference manual are available on the ST
website (STM32MP1_). A dedicated focus on STM32 DMAMUX, STM32 DMA and
STM32 MDMA is given in three application notes (AN5224_, AN4031_ &
AN5001_).

.. _STM32MP1: https://www.st.com/en/microcontrollers-microprocessors/stm32mp1-series.html
.. _AN5224: https://www.st.com/resource/en/application_note/an5224-stm32-dmamux-the-dma-request-router-stmicroelectronics.pdf
.. _AN4031: https://www.st.com/resource/en/application_note/dm00046011-using-the-stm32f2-stm32f4-and-stm32f7-series-dma-controller-stmicroelectronics.pdf
.. _AN5001: https://www.st.com/resource/en/application_note/an5001-stm32cube-expansion-package-for-stm32h7-series-mdma-stmicroelectronics.pdf

:Authors:

- Amelie Delaunay <amelie.delaunay@foss.st.com>