.. SPDX-License-Identifier: GPL-2.0

i.MX Video Capture Driver
=========================

Introduction
------------

The Freescale i.MX5/6 contains an Image Processing Unit (IPU), which
handles the flow of image frames to and from capture devices and
display devices.

For image capture, the IPU contains the following internal subunits:

- Image DMA Controller (IDMAC)
- Camera Serial Interface (CSI)
- Image Converter (IC)
- Sensor Multi-FIFO Controller (SMFC)
- Image Rotator (IRT)
- Video De-Interlacing or Combining Block (VDIC)

The IDMAC is the DMA controller for transfer of image frames to and
from memory. Various dedicated DMA channels exist for both video
capture and display paths. During transfer, the IDMAC is also capable
of vertical image flip, 8x8 block transfer (see IRT description),
pixel component re-ordering (for example UYVY to YUYV) within the same
colorspace, and packed <--> planar conversion. The IDMAC can also
perform a simple de-interlacing by interweaving even and odd lines
during transfer (without motion compensation, which requires the
VDIC).

The CSI is the backend capture unit that interfaces directly with
camera sensors over parallel, BT.656/1120, and MIPI CSI-2 buses.

The IC handles color-space conversion, resizing (downscaling and
upscaling), horizontal flip, and 90/270 degree rotation operations.

There are three independent "tasks" within the IC that can carry out
conversions concurrently: pre-process encoding, pre-process
viewfinder, and post-processing. Within each task, conversions are
split into three sections: the downsizing section, the main section
(upsizing, flip, colorspace conversion, and graphics plane combining),
and the rotation section.

The IPU time-shares the IC task operations. The time-slice granularity
is one burst of eight pixels in the downsizing section, one image line
in the main processing section, and one image frame in the rotation
section.

The SMFC is composed of four independent FIFOs, each of which can
transfer captured frames from sensors directly to memory concurrently
via four IDMAC channels.

The IRT carries out 90 and 270 degree image rotation operations. The
rotation operation is carried out on 8x8 pixel blocks at a time. This
operation is supported by the IDMAC, which handles the 8x8 block
transfer along with block reordering, in coordination with vertical
flip.

The VDIC handles the conversion of interlaced video to progressive,
with support for different motion compensation modes (low, medium, and
high motion). The deinterlaced output frames from the VDIC can be sent
to the IC pre-process viewfinder task for further conversions. The
VDIC also contains a Combiner that combines two image planes, with
alpha blending and color keying.

In addition to the IPU internal subunits, two units outside the IPU
are also involved in video capture on i.MX:

- MIPI CSI-2 Receiver for camera sensors with the MIPI CSI-2 bus
  interface. This is a Synopsys DesignWare core.
- Two video multiplexers for selecting among multiple sensor inputs
  to send to a CSI.
For more info, refer to the latest versions of the i.MX5/6 reference
manuals [#f1]_ and [#f2]_.

Features
--------

Some of the features of this driver include:

- Many different pipelines can be configured via the media controller
  API, corresponding to the hardware video capture pipelines supported
  by the i.MX.

- Supports parallel, BT.656, and MIPI CSI-2 interfaces.

- Concurrent independent streams, by configuring pipelines to multiple
  video capture interfaces using independent entities.

- Scaling, color-space conversion, horizontal and vertical flip, and
  image rotation via IC task subdevs.

- Many pixel formats supported (RGB, packed and planar YUV, partial
  planar YUV).

- The VDIC subdev supports motion compensated de-interlacing, with
  three motion compensation modes: low, medium, and high motion.
  Pipelines are defined that allow sending frames to the VDIC subdev
  directly from the CSI. Support is also planned for sending frames
  to the VDIC from memory buffers via an output/mem2mem device.

- Includes a Frame Interval Monitor (FIM) that can correct vertical
  sync problems with the ADV718x video decoders.

Topology
--------

The following shows the media topologies for the i.MX6Q SabreSD and
i.MX6Q SabreAuto. Refer to these diagrams in the entity descriptions
in the next section.

The i.MX5/6 topologies can differ upstream from the IPUv3 CSI video
multiplexers, but the internal IPUv3 topology downstream from there is
common to all i.MX5/6 platforms. For example, the SabreSD, with the
MIPI CSI-2 OV5640 sensor, requires the i.MX6 MIPI CSI-2 receiver. But
the SabreAuto has only the ADV7180 decoder on a parallel BT.656 bus,
and therefore does not require the MIPI CSI-2 receiver, so it is
missing from its graph.
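The media topology on a given board can be inspected at runtime with
media-ctl. A minimal sketch, assuming the media device node is
/dev/media0::

    media-ctl -d /dev/media0 --print-topology

media-ctl can also emit the topology as a dot graph (``--print-dot``),
which is how diagrams like the ones shown here can be generated for
other boards.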
.. _imx6q_topology_graph:

.. kernel-figure:: imx6q-sabresd.dot
   :alt:    Diagram of the i.MX6Q SabreSD media pipeline topology
   :align:  center

   Media pipeline graph on i.MX6Q SabreSD

.. kernel-figure:: imx6q-sabreauto.dot
   :alt:    Diagram of the i.MX6Q SabreAuto media pipeline topology
   :align:  center

   Media pipeline graph on i.MX6Q SabreAuto

Entities
--------

imx6-mipi-csi2
--------------

This is the MIPI CSI-2 receiver entity. It has one sink pad to receive
the MIPI CSI-2 stream (usually from a MIPI CSI-2 camera sensor). It
has four source pads, corresponding to the four MIPI CSI-2 demuxed
virtual channel outputs. Multiple source pads can be enabled to
independently stream from multiple virtual channels.

This entity actually consists of two sub-blocks. One is the MIPI CSI-2
core, a Synopsys DesignWare MIPI CSI-2 core. The other sub-block is a
"CSI-2 to IPU gasket". The gasket acts as a demultiplexer of the four
virtual channel streams, providing four separate parallel buses, one
per virtual channel, that are routed to CSIs or video multiplexers as
described below.

On i.MX6 solo/dual-lite, all four virtual channel buses are routed to
two video multiplexers. Both CSI0 and CSI1 can receive any virtual
channel, as selected by the video multiplexers.

On i.MX6 Quad, virtual channel 0 is routed to IPU1-CSI0 (after being
selected by a video mux), virtual channels 1 and 2 are hard-wired to
IPU1-CSI1 and IPU2-CSI0, respectively, and virtual channel 3 is routed
to IPU2-CSI1 (again selected by a video mux).

ipuX_csiY_mux
-------------

These are the video multiplexers. They have two or more sink pads to
select from either camera sensors with a parallel interface, or from
MIPI CSI-2 virtual channels from the imx6-mipi-csi2 entity. They have
a single source pad that routes to a CSI (ipuX_csiY entities).

On i.MX6 solo/dual-lite, there are two video mux entities. One sits in
front of IPU1-CSI0 to select between a parallel sensor and any of the
four MIPI CSI-2 virtual channels (a total of five sink pads). The
other mux sits in front of IPU1-CSI1, and again has five sink pads to
select between a parallel sensor and any of the four MIPI CSI-2
virtual channels.
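A mux input is selected by enabling the corresponding sink-pad link. A
sketch of routing a MIPI CSI-2 virtual channel through a mux to a CSI
(the pad numbers here are illustrative assumptions; verify the actual
entity names and pad indices with ``media-ctl --print-topology`` on
the target board)::

    media-ctl -l "'imx6-mipi-csi2':1 -> 'ipu1_csi0_mux':0[1]"
    media-ctl -l "'ipu1_csi0_mux':2 -> 'ipu1_csi0':0[1]"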
On i.MX6 Quad, there are two video mux entities. One sits in front of
IPU1-CSI0 to select between a parallel sensor and MIPI CSI-2 virtual
channel 0 (two sink pads). The other mux sits in front of IPU2-CSI1 to
select between a parallel sensor and MIPI CSI-2 virtual channel 3 (two
sink pads).

ipuX_csiY
---------

These are the CSI entities. They have a single sink pad receiving from
either a video mux or from a MIPI CSI-2 virtual channel as described
above.

This entity has two source pads. The first source pad can link
directly to the ipuX_vdic entity or the ipuX_ic_prp entity, using
hardware links that require no IDMAC memory buffer transfer.

When the direct source pad is routed to the ipuX_ic_prp entity, frames
from the CSI can be processed by one or both of the IC pre-processing
tasks.

When the direct source pad is routed to the ipuX_vdic entity, the VDIC
will carry out motion-compensated de-interlacing using "high motion"
mode (see the description of the ipuX_vdic entity).

The second source pad sends video frames directly to memory buffers
via the SMFC and an IDMAC channel, bypassing IC pre-processing. This
source pad is routed to a capture device node, with a node name of the
format "ipuX_csiY capture".

Note that since the IDMAC source pad makes use of an IDMAC channel,
pixel reordering within the same colorspace can be carried out. For
example, if the CSI sink pad is receiving in UYVY order, the capture
device linked to the IDMAC source pad can capture in YUYV order. Also,
if the CSI sink pad is receiving a packed YUV format, the capture
device can capture a planar YUV format such as YUV420.

The IDMAC channel at the IDMAC source pad also supports simple
interweave without motion compensation, which is activated if the
source pad's field type is sequential top-bottom or bottom-top, and
the requested capture interface field type is set to interlaced (t-b,
b-t, or unqualified interlaced). The capture interface will enforce
the same field order as the source pad field order (interlaced-bt if
the source pad is seq-bt, interlaced-tb if the source pad is seq-tb).

For events produced by ipuX_csiY, see :ref:`imx_api_ipuX_csiY`.

Cropping in ipuX_csiY
---------------------

The CSI supports cropping the incoming raw sensor frames. This is
implemented in the ipuX_csiY entities at the sink pad, using the crop
selection subdev API.

The CSI also supports fixed divide-by-two downscaling independently in
width and height. This is implemented in the ipuX_csiY entities at the
sink pad, using the compose selection subdev API.

The output rectangle at the ipuX_csiY source pad is the same as the
compose rectangle at the sink pad. So the source pad rectangle cannot
be negotiated; it must be set using the compose selection API at the
sink pad (if /2 downscale is desired, otherwise the source pad
rectangle is equal to the incoming rectangle).
To give an example of crop and /2 downscale, this will crop a 1280x960
input frame to 640x480, and then /2 downscale in both dimensions to
320x240 (this assumes ipu1_csi0 is linked to ipu1_csi0_mux)::

    media-ctl -V "'ipu1_csi0_mux':2[fmt:UYVY2X8/1280x960]"
    media-ctl -V "'ipu1_csi0':0[crop:(0,0)/640x480]"
    media-ctl -V "'ipu1_csi0':0[compose:(0,0)/320x240]"

Frame Skipping in ipuX_csiY
---------------------------

The CSI supports frame rate decimation, via frame skipping. Frame rate
decimation is specified by setting the frame intervals at the sink and
source pads. The ipuX_csiY entity then applies the best frame skip
setting to the CSI to achieve the desired frame rate at the source
pad.

The following example reduces an assumed incoming 60 Hz frame rate by
half at the IDMAC output source pad::

    media-ctl -V "'ipu1_csi0':0[fmt:UYVY2X8/640x480@1/60]"
    media-ctl -V "'ipu1_csi0':2[fmt:UYVY2X8/640x480@1/30]"

Frame Interval Monitor in ipuX_csiY
-----------------------------------

See :ref:`imx_api_FIM`.

ipuX_vdic
---------

The VDIC carries out motion compensated de-interlacing, with three
motion compensation modes: low, medium, and high motion. The mode is
specified with the menu control V4L2_CID_DEINTERLACING_MODE. The VDIC
has two sink pads and a single source pad.

The direct sink pad receives from an ipuX_csiY direct pad. With this
link the VDIC can only operate in high motion mode.
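The direct CSI-to-VDIC link is enabled like any other media link. A
sketch, assuming the ipu1/csi0 entity names used in the examples above
(pad numbers are assumptions; verify with ``media-ctl
--print-topology``)::

    media-ctl -l "'ipu1_csi0':1 -> 'ipu1_vdic':0[1]"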
With this link the VDIC can only operate in high motion mode.h]hxThe direct sink pad receives from an ipuX_csiY direct pad. With this link the VDIC can only operate in high motion mode.}(hjhhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjIhhubh)}(hXHWhen the IDMAC sink pad is activated, it receives from an output or mem2mem device node. With this pipeline, the VDIC can also operate in low and medium modes, because these modes require receiving frames from memory buffers. Note that an output or mem2mem device is not implemented yet, so this sink pad currently has no links.h]hXHWhen the IDMAC sink pad is activated, it receives from an output or mem2mem device node. With this pipeline, the VDIC can also operate in low and medium modes, because these modes require receiving frames from memory buffers. Note that an output or mem2mem device is not implemented yet, so this sink pad currently has no links.}(hjvhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjIhhubh)}(hBThe source pad routes to the IC pre-processing entity ipuX_ic_prp.h]hBThe source pad routes to the IC pre-processing entity ipuX_ic_prp.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM#hjIhhubeh}(h] ipux-vdicah ]h"] ipux_vdicah$]h&]uh1hhhhhhhhMubh)}(hhh](h)}(h ipuX_ic_prph]h ipuX_ic_prp}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhM&ubh)}(h|This is the IC pre-processing entity. It acts as a router, routing data from its sink pad to one or both of its source pads.h]h|This is the IC pre-processing entity. It acts as a router, routing data from its sink pad to one or both of its source pads.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM(hjhhubh)}(hmThis entity has a single sink pad. The sink pad can receive from the ipuX_csiY direct pad, or from ipuX_vdic.h]hmThis entity has a single sink pad. The sink pad can receive from the ipuX_csiY direct pad, or from ipuX_vdic.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM+hjhhubh)}(hXThis entity has two source pads. 
One source pad routes to the pre-process encode task entity (ipuX_ic_prpenc), the other to the pre-process viewfinder task entity (ipuX_ic_prpvf). Both source pads can be activated at the same time if the sink pad is receiving from ipuX_csiY. Only the source pad to the pre-process viewfinder task entity can be activated if the sink pad is receiving from ipuX_vdic (frames from the VDIC can only be processed by the pre-process viewfinder task).h]hXThis entity has two source pads. One source pad routes to the pre-process encode task entity (ipuX_ic_prpenc), the other to the pre-process viewfinder task entity (ipuX_ic_prpvf). Both source pads can be activated at the same time if the sink pad is receiving from ipuX_csiY. Only the source pad to the pre-process viewfinder task entity can be activated if the sink pad is receiving from ipuX_vdic (frames from the VDIC can only be processed by the pre-process viewfinder task).}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM.hjhhubeh}(h] ipux-ic-prpah ]h"] ipux_ic_prpah$]h&]uh1hhhhhhhhM&ubh)}(hhh](h)}(hipuX_ic_prpench]hipuX_ic_prpenc}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhM7ubh)}(hThis is the IC pre-processing encode entity. It has a single sink pad from ipuX_ic_prp, and a single source pad. The source pad is routed to a capture device node, with a node name of the format "ipuX_ic_prpenc capture".h]hThis is the IC pre-processing encode entity. It has a single sink pad from ipuX_ic_prp, and a single source pad. The source pad is routed to a capture device node, with a node name of the format “ipuX_ic_prpenc capture”.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM9hjhhubh)}(hThis entity performs the IC pre-process encode task operations: color-space conversion, resizing (downscaling and upscaling), horizontal and vertical flip, and 90/270 degree rotation. 
Flip and rotation are provided via standard V4L2 controls.h]hThis entity performs the IC pre-process encode task operations: color-space conversion, resizing (downscaling and upscaling), horizontal and vertical flip, and 90/270 degree rotation. Flip and rotation are provided via standard V4L2 controls.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM>hjhhubh)}(hLike the ipuX_csiY IDMAC source, this entity also supports simple de-interlace without motion compensation, and pixel reordering.h]hLike the ipuX_csiY IDMAC source, this entity also supports simple de-interlace without motion compensation, and pixel reordering.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMChjhhubeh}(h]ipux-ic-prpencah ]h"]ipux_ic_prpencah$]h&]uh1hhhhhhhhM7ubh)}(hhh](h)}(h ipuX_ic_prpvfh]h ipuX_ic_prpvf}(hj#hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj hhhhhMGubh)}(hThis is the IC pre-processing viewfinder entity. It has a single sink pad from ipuX_ic_prp, and a single source pad. The source pad is routed to a capture device node, with a node name of the format "ipuX_ic_prpvf capture".h]hThis is the IC pre-processing viewfinder entity. It has a single sink pad from ipuX_ic_prp, and a single source pad. The source pad is routed to a capture device node, with a node name of the format “ipuX_ic_prpvf capture”.}(hj1hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMIhj hhubh)}(hThis entity is identical in operation to ipuX_ic_prpenc, with the same resizing and CSC operations and flip/rotation controls. It will receive and process de-interlaced frames from the ipuX_vdic if ipuX_ic_prp is receiving from ipuX_vdic.h]hThis entity is identical in operation to ipuX_ic_prpenc, with the same resizing and CSC operations and flip/rotation controls. It will receive and process de-interlaced frames from the ipuX_vdic if ipuX_ic_prp is receiving from ipuX_vdic.}(hj?hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMNhj hhubh)}(hXLike the ipuX_csiY IDMAC source, this entity supports simple interweaving without motion compensation. 
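Interweaved capture is selected by choosing an interlaced field type at the capture device node, as in this sketch (the device index is an assumption; the same command form appears in the SabreAuto examples later in this document):

```shell
# Request simple interweave by selecting an interlaced field order
# at the capture interface (device index assumed).
v4l2-ctl -d2 --set-fmt-video=field=interlaced_bt
```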
However, note that if the ipuX_vdic is included in the pipeline (ipuX_ic_prp is receiving from ipuX_vdic), it's not possible to use interweave in ipuX_ic_prpvf, since the ipuX_vdic has already carried out de-interlacing (with motion compensation) and therefore the field type output from ipuX_vdic can only be none (progressive).h]hXLike the ipuX_csiY IDMAC source, this entity supports simple interweaving without motion compensation. However, note that if the ipuX_vdic is included in the pipeline (ipuX_ic_prp is receiving from ipuX_vdic), it’s not possible to use interweave in ipuX_ic_prpvf, since the ipuX_vdic has already carried out de-interlacing (with motion compensation) and therefore the field type output from ipuX_vdic can only be none (progressive).}(hjMhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMShj hhubeh}(h] ipux-ic-prpvfah ]h"] ipux_ic_prpvfah$]h&]uh1hhhhhhhhMGubh)}(hhh](h)}(hCapture Pipelinesh]hCapture Pipelines}(hjfhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjchhhhhM\ubh)}(hHThe following describe the various use-cases supported by the pipelines.h]hHThe following describe the various use-cases supported by the pipelines.}(hjthhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM^hjchhubh)}(hThe links shown do not include the backend sensor, video mux, or mipi csi-2 receiver links. This depends on the type of sensor interface (parallel or mipi csi-2). So these pipelines begin with:h]hThe links shown do not include the backend sensor, video mux, or mipi csi-2 receiver links. This depends on the type of sensor interface (parallel or mipi csi-2). 
So these pipelines begin with:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM`hjchhubh)}(hsensor -> ipuX_csiY_mux -> ...h]hsensor -> ipuX_csiY_mux -> ...}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMdhjchhubh)}(hfor parallel sensors, or:h]hfor parallel sensors, or:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMfhjchhubh)}(h2sensor -> imx6-mipi-csi2 -> (ipuX_csiY_mux) -> ...h]h2sensor -> imx6-mipi-csi2 -> (ipuX_csiY_mux) -> ...}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhhjchhubh)}(hfor mipi csi-2 sensors. The imx6-mipi-csi2 receiver may need to route to the video mux (ipuX_csiY_mux) before sending to the CSI, depending on the mipi csi-2 virtual channel, hence ipuX_csiY_mux is shown in parenthesis.h]hfor mipi csi-2 sensors. The imx6-mipi-csi2 receiver may need to route to the video mux (ipuX_csiY_mux) before sending to the CSI, depending on the mipi csi-2 virtual channel, hence ipuX_csiY_mux is shown in parenthesis.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMjhjchhubeh}(h]capture-pipelinesah ]h"]capture pipelinesah$]h&]uh1hhhhhhhhM\ubh)}(hhh](h)}(hUnprocessed Video Capture:h]hUnprocessed Video Capture:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhMpubh)}(hvSend frames directly from sensor to camera device interface node, with no conversions, via ipuX_csiY IDMAC source pad:h]hvSend frames directly from sensor to camera device interface node, with no conversions, via ipuX_csiY IDMAC source pad:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMrhjhhubh)}(h#-> ipuX_csiY:2 -> ipuX_csiY captureh]h#-> ipuX_csiY:2 -> ipuX_csiY capture}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMuhjhhubeh}(h]unprocessed-video-captureah ]h"]unprocessed video capture:ah$]h&]uh1hhhhhhhhMpubh)}(hhh](h)}(hIC Direct Conversions:h]hIC Direct Conversions:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhMxubh)}(hThis pipeline uses the preprocess encode entity to route frames directly from the CSI to the IC, to carry out scaling up to 1024x1024 resolution, CSC, flipping, and image rotation:h]hThis pipeline uses the preprocess encode entity to route frames directly from 
the CSI to the IC, to carry out scaling up to 1024x1024 resolution, CSC, flipping, and image rotation:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMzhjhhubh)}(hQ-> ipuX_csiY:1 -> 0:ipuX_ic_prp:1 -> 0:ipuX_ic_prpenc:1 -> ipuX_ic_prpenc captureh]hQ-> ipuX_csiY:1 -> 0:ipuX_ic_prp:1 -> 0:ipuX_ic_prpenc:1 -> ipuX_ic_prpenc capture}(hj$hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM~hjhhubeh}(h]ic-direct-conversionsah ]h"]ic direct conversions:ah$]h&]uh1hhhhhhhhMxubh)}(hhh](h)}(h Motion Compensated De-interlace:h]h Motion Compensated De-interlace:}(hj=hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj:hhhhhMubh)}(hThis pipeline routes frames from the CSI direct pad to the VDIC entity to support motion-compensated de-interlacing (high motion mode only), scaling up to 1024x1024, CSC, flip, and rotation:h]hThis pipeline routes frames from the CSI direct pad to the VDIC entity to support motion-compensated de-interlacing (high motion mode only), scaling up to 1024x1024, CSC, flip, and rotation:}(hjKhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj:hhubh)}(h`-> ipuX_csiY:1 -> 0:ipuX_vdic:2 -> 0:ipuX_ic_prp:2 -> 0:ipuX_ic_prpvf:1 -> ipuX_ic_prpvf captureh]h`-> ipuX_csiY:1 -> 0:ipuX_vdic:2 -> 0:ipuX_ic_prp:2 -> 0:ipuX_ic_prpvf:1 -> ipuX_ic_prpvf capture}(hjYhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj:hhubeh}(h]motion-compensated-de-interlaceah ]h"] motion compensated de-interlace:ah$]h&]uh1hhhhhhhhMubh)}(hhh](h)}(h Usage Notesh]h Usage Notes}(hjrhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjohhhhhMubh)}(hXTo aid in configuration and for backward compatibility with V4L2 applications that access controls only from video device nodes, the capture device interfaces inherit controls from the active entities in the current pipeline, so controls can be accessed either directly from the subdev or from the active capture device interface. 
For example, the FIM controls are available either from the ipuX_csiY subdevs or from the active capture device.h]hXTo aid in configuration and for backward compatibility with V4L2 applications that access controls only from video device nodes, the capture device interfaces inherit controls from the active entities in the current pipeline, so controls can be accessed either directly from the subdev or from the active capture device interface. For example, the FIM controls are available either from the ipuX_csiY subdevs or from the active capture device.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjohhubh)}(hGThe following are specific usage notes for the Sabre* reference boards:h]hGThe following are specific usage notes for the Sabre* reference boards:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjohhubeh}(h] usage-notesah ]h"] usage notesah$]h&]uh1hhhhhhhhMubh)}(hhh](h)}(h'i.MX6Q SabreLite with OV5642 and OV5640h]h'i.MX6Q SabreLite with OV5642 and OV5640}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhMubh)}(hThis platform requires the OmniVision OV5642 module with a parallel camera interface, and the OV5640 module with a MIPI CSI-2 interface. Both modules are available from Boundary Devices:h]hThis platform requires the OmniVision OV5642 module with a parallel camera interface, and the OV5640 module with a MIPI CSI-2 interface. 
Both modules are available from Boundary Devices:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubh)}(hhh](h)}(h-https://boundarydevices.com/product/nit6x_5mph]h)}(hjh]h reference)}(hjh]h-https://boundarydevices.com/product/nit6x_5mp}(hjhhhNhNubah}(h]h ]h"]h$]h&]refurijuh1jhjubah}(h]h ]h"]h$]h&]uh1hhhhMhjubah}(h]h ]h"]h$]h&]uh1hhjhhhhhNubh)}(h3https://boundarydevices.com/product/nit6x_5mp_mipi h]h)}(h2https://boundarydevices.com/product/nit6x_5mp_mipih]j)}(hjh]h2https://boundarydevices.com/product/nit6x_5mp_mipi}(hjhhhNhNubah}(h]h ]h"]h$]h&]refurijuh1jhjubah}(h]h ]h"]h$]h&]uh1hhhhMhjubah}(h]h ]h"]h$]h&]uh1hhjhhhhhNubeh}(h]h ]h"]h$]h&]jjuh1hhhhMhjhhubh)}(hkNote that if only one camera module is available, the other sensor node can be disabled in the device tree.h]hkNote that if only one camera module is available, the other sensor node can be disabled in the device tree.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubh)}(hThe OV5642 module is connected to the parallel bus input on the i.MX internal video mux to IPU1 CSI0. Its i2c bus connects to i2c bus 2.h]hThe OV5642 module is connected to the parallel bus input on the i.MX internal video mux to IPU1 CSI0. Its i2c bus connects to i2c bus 2.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubh)}(hXThe MIPI CSI-2 OV5640 module is connected to the i.MX internal MIPI CSI-2 receiver, and the four virtual channel outputs from the receiver are routed as follows: vc0 to the IPU1 CSI0 mux, vc1 directly to IPU1 CSI1, vc2 directly to IPU2 CSI0, and vc3 to the IPU2 CSI1 mux. 
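The examples in this document suggest a simple mapping from MIPI CSI-2 virtual channel to imx6-mipi-csi2 source pad: vc0 is routed from pad 1 and vc1 from pad 2, i.e. source pad = vc + 1, with pad 0 being the sink. A sketch of that arithmetic, under that assumption:

```shell
# Assumed mapping, inferred from the vc0/pad-1 and vc1/pad-2 examples
# in this document: imx6-mipi-csi2 source pad = virtual channel + 1
# (pad 0 is the sink pad).
vc=1
pad=$(( vc + 1 ))
echo "imx6-mipi-csi2 source pad for vc${vc}: ${pad}"
```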
The OV5640 is also connected to i2c bus 2 on the SabreLite, therefore the OV5642 and OV5640 must not share the same i2c slave address.}(hj- hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubh)}(hXXThe following basic example configures unprocessed video capture pipelines for both sensors. The OV5642 is routed to ipu1_csi0, and the OV5640, transmitting on MIPI CSI-2 virtual channel 1 (which is imx6-mipi-csi2 pad 2), is routed to ipu1_csi1. Both sensors are configured to output 640x480, and the OV5642 outputs YUYV2X8, the OV5640 UYVY2X8:h]hXXThe following basic example configures unprocessed video capture pipelines for both sensors. The OV5642 is routed to ipu1_csi0, and the OV5640, transmitting on MIPI CSI-2 virtual channel 1 (which is imx6-mipi-csi2 pad 2), is routed to ipu1_csi1. Both sensors are configured to output 640x480, and the OV5642 outputs YUYV2X8, the OV5640 UYVY2X8:}(hj; hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubj)}(hXN# Setup links for OV5642 media-ctl -l "'ov5642 1-0042':0 -> 'ipu1_csi0_mux':1[1]" media-ctl -l "'ipu1_csi0_mux':2 -> 'ipu1_csi0':0[1]" media-ctl -l "'ipu1_csi0':2 -> 'ipu1_csi0 capture':0[1]" # Setup links for OV5640 media-ctl -l "'ov5640 1-0040':0 -> 'imx6-mipi-csi2':0[1]" media-ctl -l "'imx6-mipi-csi2':2 -> 'ipu1_csi1':0[1]" media-ctl -l "'ipu1_csi1':2 -> 'ipu1_csi1 capture':0[1]" # Configure pads for OV5642 pipeline media-ctl -V "'ov5642 1-0042':0 [fmt:YUYV2X8/640x480 field:none]" media-ctl -V "'ipu1_csi0_mux':2 [fmt:YUYV2X8/640x480 field:none]" media-ctl -V "'ipu1_csi0':2 [fmt:AYUV32/640x480 field:none]" # Configure pads for OV5640 pipeline media-ctl -V "'ov5640 1-0040':0 [fmt:UYVY2X8/640x480 field:none]" media-ctl -V "'imx6-mipi-csi2':2 [fmt:UYVY2X8/640x480 field:none]" media-ctl -V "'ipu1_csi1':2 [fmt:AYUV32/640x480 field:none]"h]hXN# Setup links for OV5642 media-ctl -l "'ov5642 1-0042':0 -> 'ipu1_csi0_mux':1[1]" media-ctl -l "'ipu1_csi0_mux':2 -> 'ipu1_csi0':0[1]" media-ctl -l "'ipu1_csi0':2 -> 'ipu1_csi0 capture':0[1]" # Setup links for 
OV5640 media-ctl -l "'ov5640 1-0040':0 -> 'imx6-mipi-csi2':0[1]" media-ctl -l "'imx6-mipi-csi2':2 -> 'ipu1_csi1':0[1]" media-ctl -l "'ipu1_csi1':2 -> 'ipu1_csi1 capture':0[1]" # Configure pads for OV5642 pipeline media-ctl -V "'ov5642 1-0042':0 [fmt:YUYV2X8/640x480 field:none]" media-ctl -V "'ipu1_csi0_mux':2 [fmt:YUYV2X8/640x480 field:none]" media-ctl -V "'ipu1_csi0':2 [fmt:AYUV32/640x480 field:none]" # Configure pads for OV5640 pipeline media-ctl -V "'ov5640 1-0040':0 [fmt:UYVY2X8/640x480 field:none]" media-ctl -V "'imx6-mipi-csi2':2 [fmt:UYVY2X8/640x480 field:none]" media-ctl -V "'ipu1_csi1':2 [fmt:AYUV32/640x480 field:none]"}hjI sbah}(h]h ]h"]h$]h&]hhjjnonej}uh1jhhhMhjhhubh)}(hStreaming can then begin independently on the capture device nodes "ipu1_csi0 capture" and "ipu1_csi1 capture". The v4l2-ctl tool can be used to select any supported YUV pixelformat on the capture device nodes, including planar.h]hStreaming can then begin independently on the capture device nodes “ipu1_csi0 capture” and “ipu1_csi1 capture”. The v4l2-ctl tool can be used to select any supported YUV pixelformat on the capture device nodes, including planar.}(hjY hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubeh}(h]'i-mx6q-sabrelite-with-ov5642-and-ov5640ah ]h"]'i.mx6q sabrelite with ov5642 and ov5640ah$]h&]uh1hhhhhhhhMubh)}(hhh](h)}(h%i.MX6Q SabreAuto with ADV7180 decoderh]h%i.MX6Q SabreAuto with ADV7180 decoder}(hjr hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjo hhhhhMubh)}(hOn the i.MX6Q SabreAuto, an on-board ADV7180 SD decoder is connected to the parallel bus input on the internal video mux to IPU1 CSI0.h]hOn the i.MX6Q SabreAuto, an on-board ADV7180 SD decoder is connected to the parallel bus input on the internal video mux to IPU1 CSI0.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjo hhubh)}(hX+The following example configures a pipeline to capture from the ADV7180 video decoder, assuming NTSC 720x480 input signals, using simple interweave (unconverted and without motion compensation). 
The adv7180 must output sequential or alternating fields (field type 'seq-bt' for NTSC, or 'alternate'):h]hX3The following example configures a pipeline to capture from the ADV7180 video decoder, assuming NTSC 720x480 input signals, using simple interweave (unconverted and without motion compensation). The adv7180 must output sequential or alternating fields (field type ‘seq-bt’ for NTSC, or ‘alternate’):}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjo hhubj)}(hX# Setup links media-ctl -l "'adv7180 3-0021':0 -> 'ipu1_csi0_mux':1[1]" media-ctl -l "'ipu1_csi0_mux':2 -> 'ipu1_csi0':0[1]" media-ctl -l "'ipu1_csi0':2 -> 'ipu1_csi0 capture':0[1]" # Configure pads media-ctl -V "'adv7180 3-0021':0 [fmt:UYVY2X8/720x480 field:seq-bt]" media-ctl -V "'ipu1_csi0_mux':2 [fmt:UYVY2X8/720x480]" media-ctl -V "'ipu1_csi0':2 [fmt:AYUV32/720x480]" # Configure "ipu1_csi0 capture" interface (assumed at /dev/video4) v4l2-ctl -d4 --set-fmt-video=field=interlaced_bth]hX# Setup links media-ctl -l "'adv7180 3-0021':0 -> 'ipu1_csi0_mux':1[1]" media-ctl -l "'ipu1_csi0_mux':2 -> 'ipu1_csi0':0[1]" media-ctl -l "'ipu1_csi0':2 -> 'ipu1_csi0 capture':0[1]" # Configure pads media-ctl -V "'adv7180 3-0021':0 [fmt:UYVY2X8/720x480 field:seq-bt]" media-ctl -V "'ipu1_csi0_mux':2 [fmt:UYVY2X8/720x480]" media-ctl -V "'ipu1_csi0':2 [fmt:AYUV32/720x480]" # Configure "ipu1_csi0 capture" interface (assumed at /dev/video4) v4l2-ctl -d4 --set-fmt-video=field=interlaced_bt}hj sbah}(h]h ]h"]h$]h&]hhjjnonej}uh1jhhhMhjo hhubh)}(hStreaming can then begin on /dev/video4. The v4l2-ctl tool can also be used to select any supported YUV pixelformat on /dev/video4.h]hStreaming can then begin on /dev/video4. The v4l2-ctl tool can also be used to select any supported YUV pixelformat on /dev/video4.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjo hhubh)}(hXThis example configures a pipeline to capture from the ADV7180 video decoder, assuming PAL 720x576 input signals, with Motion Compensated de-interlacing. 
The adv7180 must output sequential or alternating fields (field type 'seq-tb' for PAL, or 'alternate').h]hX This example configures a pipeline to capture from the ADV7180 video decoder, assuming PAL 720x576 input signals, with Motion Compensated de-interlacing. The adv7180 must output sequential or alternating fields (field type ‘seq-tb’ for PAL, or ‘alternate’).}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjo hhubj)}(hXC# Setup links media-ctl -l "'adv7180 3-0021':0 -> 'ipu1_csi0_mux':1[1]" media-ctl -l "'ipu1_csi0_mux':2 -> 'ipu1_csi0':0[1]" media-ctl -l "'ipu1_csi0':1 -> 'ipu1_vdic':0[1]" media-ctl -l "'ipu1_vdic':2 -> 'ipu1_ic_prp':0[1]" media-ctl -l "'ipu1_ic_prp':2 -> 'ipu1_ic_prpvf':0[1]" media-ctl -l "'ipu1_ic_prpvf':1 -> 'ipu1_ic_prpvf capture':0[1]" # Configure pads media-ctl -V "'adv7180 3-0021':0 [fmt:UYVY2X8/720x576 field:seq-tb]" media-ctl -V "'ipu1_csi0_mux':2 [fmt:UYVY2X8/720x576]" media-ctl -V "'ipu1_csi0':1 [fmt:AYUV32/720x576]" media-ctl -V "'ipu1_vdic':2 [fmt:AYUV32/720x576 field:none]" media-ctl -V "'ipu1_ic_prp':2 [fmt:AYUV32/720x576 field:none]" media-ctl -V "'ipu1_ic_prpvf':1 [fmt:AYUV32/720x576 field:none]" # Configure "ipu1_ic_prpvf capture" interface (assumed at /dev/video2) v4l2-ctl -d2 --set-fmt-video=field=noneh]hXC# Setup links media-ctl -l "'adv7180 3-0021':0 -> 'ipu1_csi0_mux':1[1]" media-ctl -l "'ipu1_csi0_mux':2 -> 'ipu1_csi0':0[1]" media-ctl -l "'ipu1_csi0':1 -> 'ipu1_vdic':0[1]" media-ctl -l "'ipu1_vdic':2 -> 'ipu1_ic_prp':0[1]" media-ctl -l "'ipu1_ic_prp':2 -> 'ipu1_ic_prpvf':0[1]" media-ctl -l "'ipu1_ic_prpvf':1 -> 'ipu1_ic_prpvf capture':0[1]" # Configure pads media-ctl -V "'adv7180 3-0021':0 [fmt:UYVY2X8/720x576 field:seq-tb]" media-ctl -V "'ipu1_csi0_mux':2 [fmt:UYVY2X8/720x576]" media-ctl -V "'ipu1_csi0':1 [fmt:AYUV32/720x576]" media-ctl -V "'ipu1_vdic':2 [fmt:AYUV32/720x576 field:none]" media-ctl -V "'ipu1_ic_prp':2 [fmt:AYUV32/720x576 field:none]" media-ctl -V "'ipu1_ic_prpvf':1 [fmt:AYUV32/720x576 field:none]" # Configure 
"ipu1_ic_prpvf capture" interface (assumed at /dev/video2) v4l2-ctl -d2 --set-fmt-video=field=none}hj sbah}(h]h ]h"]h$]h&]hhjjnonej}uh1jhhhMhjo hhubh)}(hStreaming can then begin on /dev/video2. The v4l2-ctl tool can also be used to select any supported YUV pixelformat on /dev/video2.h]hStreaming can then begin on /dev/video2. The v4l2-ctl tool can also be used to select any supported YUV pixelformat on /dev/video2.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjo hhubh)}(h[This platform accepts Composite Video analog inputs to the ADV7180 on Ain1 (connector J42).h]h[This platform accepts Composite Video analog inputs to the ADV7180 on Ain1 (connector J42).}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjo hhubeh}(h]%i-mx6q-sabreauto-with-adv7180-decoderah ]h"]%i.mx6q sabreauto with adv7180 decoderah$]h&]uh1hhhhhhhhMubh)}(hhh](h)}(h&i.MX6DL SabreAuto with ADV7180 decoderh]h&i.MX6DL SabreAuto with ADV7180 decoder}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj hhhhhM ubh)}(hOn the i.MX6DL SabreAuto, an on-board ADV7180 SD decoder is connected to the parallel bus input on the internal video mux to IPU1 CSI0.h]hOn the i.MX6DL SabreAuto, an on-board ADV7180 SD decoder is connected to the parallel bus input on the internal video mux to IPU1 CSI0.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM hj hhubh)}(hX+The following example configures a pipeline to capture from the ADV7180 video decoder, assuming NTSC 720x480 input signals, using simple interweave (unconverted and without motion compensation). The adv7180 must output sequential or alternating fields (field type 'seq-bt' for NTSC, or 'alternate'):h]hX3The following example configures a pipeline to capture from the ADV7180 video decoder, assuming NTSC 720x480 input signals, using simple interweave (unconverted and without motion compensation). 
The adv7180 must output sequential or alternating fields (field type ‘seq-bt’ for NTSC, or ‘alternate’):}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj hhubj)}(hX# Setup links media-ctl -l "'adv7180 4-0021':0 -> 'ipu1_csi0_mux':4[1]" media-ctl -l "'ipu1_csi0_mux':5 -> 'ipu1_csi0':0[1]" media-ctl -l "'ipu1_csi0':2 -> 'ipu1_csi0 capture':0[1]" # Configure pads media-ctl -V "'adv7180 4-0021':0 [fmt:UYVY2X8/720x480 field:seq-bt]" media-ctl -V "'ipu1_csi0_mux':5 [fmt:UYVY2X8/720x480]" media-ctl -V "'ipu1_csi0':2 [fmt:AYUV32/720x480]" # Configure "ipu1_csi0 capture" interface (assumed at /dev/video0) v4l2-ctl -d0 --set-fmt-video=field=interlaced_bth]hX# Setup links media-ctl -l "'adv7180 4-0021':0 -> 'ipu1_csi0_mux':4[1]" media-ctl -l "'ipu1_csi0_mux':5 -> 'ipu1_csi0':0[1]" media-ctl -l "'ipu1_csi0':2 -> 'ipu1_csi0 capture':0[1]" # Configure pads media-ctl -V "'adv7180 4-0021':0 [fmt:UYVY2X8/720x480 field:seq-bt]" media-ctl -V "'ipu1_csi0_mux':5 [fmt:UYVY2X8/720x480]" media-ctl -V "'ipu1_csi0':2 [fmt:AYUV32/720x480]" # Configure "ipu1_csi0 capture" interface (assumed at /dev/video0) v4l2-ctl -d0 --set-fmt-video=field=interlaced_bt}hj) sbah}(h]h ]h"]h$]h&]hhjjnonej}uh1jhhhMhj hhubh)}(hStreaming can then begin on /dev/video0. The v4l2-ctl tool can also be used to select any supported YUV pixelformat on /dev/video0.h]hStreaming can then begin on /dev/video0. The v4l2-ctl tool can also be used to select any supported YUV pixelformat on /dev/video0.}(hj9 hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM"hj hhubh)}(hXThis example configures a pipeline to capture from the ADV7180 video decoder, assuming PAL 720x576 input signals, with Motion Compensated de-interlacing. The adv7180 must output sequential or alternating fields (field type 'seq-tb' for PAL, or 'alternate').h]hX This example configures a pipeline to capture from the ADV7180 video decoder, assuming PAL 720x576 input signals, with Motion Compensated de-interlacing. 
The adv7180 must output sequential or alternating fields (field type ‘seq-tb’ for PAL, or ‘alternate’).}(hjG hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM%hj hhubj)}(hXC# Setup links media-ctl -l "'adv7180 4-0021':0 -> 'ipu1_csi0_mux':4[1]" media-ctl -l "'ipu1_csi0_mux':5 -> 'ipu1_csi0':0[1]" media-ctl -l "'ipu1_csi0':1 -> 'ipu1_vdic':0[1]" media-ctl -l "'ipu1_vdic':2 -> 'ipu1_ic_prp':0[1]" media-ctl -l "'ipu1_ic_prp':2 -> 'ipu1_ic_prpvf':0[1]" media-ctl -l "'ipu1_ic_prpvf':1 -> 'ipu1_ic_prpvf capture':0[1]" # Configure pads media-ctl -V "'adv7180 4-0021':0 [fmt:UYVY2X8/720x576 field:seq-tb]" media-ctl -V "'ipu1_csi0_mux':5 [fmt:UYVY2X8/720x576]" media-ctl -V "'ipu1_csi0':1 [fmt:AYUV32/720x576]" media-ctl -V "'ipu1_vdic':2 [fmt:AYUV32/720x576 field:none]" media-ctl -V "'ipu1_ic_prp':2 [fmt:AYUV32/720x576 field:none]" media-ctl -V "'ipu1_ic_prpvf':1 [fmt:AYUV32/720x576 field:none]" # Configure "ipu1_ic_prpvf capture" interface (assumed at /dev/video2) v4l2-ctl -d2 --set-fmt-video=field=noneKh]hXC# Setup links media-ctl -l "'adv7180 4-0021':0 -> 'ipu1_csi0_mux':4[1]" media-ctl -l "'ipu1_csi0_mux':5 -> 'ipu1_csi0':0[1]" media-ctl -l "'ipu1_csi0':1 -> 'ipu1_vdic':0[1]" media-ctl -l "'ipu1_vdic':2 -> 'ipu1_ic_prp':0[1]" media-ctl -l "'ipu1_ic_prp':2 -> 'ipu1_ic_prpvf':0[1]" media-ctl -l "'ipu1_ic_prpvf':1 -> 'ipu1_ic_prpvf capture':0[1]" # Configure pads media-ctl -V "'adv7180 4-0021':0 [fmt:UYVY2X8/720x576 field:seq-tb]" media-ctl -V "'ipu1_csi0_mux':5 [fmt:UYVY2X8/720x576]" media-ctl -V "'ipu1_csi0':1 [fmt:AYUV32/720x576]" media-ctl -V "'ipu1_vdic':2 [fmt:AYUV32/720x576 field:none]" media-ctl -V "'ipu1_ic_prp':2 [fmt:AYUV32/720x576 field:none]" media-ctl -V "'ipu1_ic_prpvf':1 [fmt:AYUV32/720x576 field:none]" # Configure "ipu1_ic_prpvf capture" interface (assumed at /dev/video2) v4l2-ctl -d2 --set-fmt-video=field=none}hjU sbah}(h]h ]h"]h$]h&]hhjjnonej}uh1jhhhM*hj hhubh)}(hStreaming can then begin on /dev/video2. 
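For instance, a planar YUV 4:2:0 format could be requested with a sketch like the following ('YU12' is the standard V4L2 fourcc for planar YUV420; the device index follows the example above):

```shell
# Select a planar pixelformat on the capture node, then read back the
# active format ('YU12' = planar YUV 4:2:0; device index assumed).
v4l2-ctl -d2 --set-fmt-video=pixelformat=YU12
v4l2-ctl -d2 --get-fmt-video
```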
The v4l2-ctl tool can also be used to select any supported YUV pixelformat on /dev/video2.h]hStreaming can then begin on /dev/video2. The v4l2-ctl tool can also be used to select any supported YUV pixelformat on /dev/video2.}(hje hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM=hj hhubh)}(h[This platform accepts Composite Video analog inputs to the ADV7180 on Ain1 (connector J42).h]h[This platform accepts Composite Video analog inputs to the ADV7180 on Ain1 (connector J42).}(hjs hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM@hj hhubeh}(h]&i-mx6dl-sabreauto-with-adv7180-decoderah ]h"]&i.mx6dl sabreauto with adv7180 decoderah$]h&]uh1hhhhhhhhM ubh)}(hhh](h)}(h%i.MX6Q SabreSD with MIPI CSI-2 OV5640h]h%i.MX6Q SabreSD with MIPI CSI-2 OV5640}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj hhhhhMDubh)}(hSimilarly to i.MX6Q SabreLite, the i.MX6Q SabreSD supports a parallel interface OV5642 module on IPU1 CSI0, and a MIPI CSI-2 OV5640 module. The OV5642 connects to i2c bus 1 and the OV5640 to i2c bus 2.h]hSimilarly to i.MX6Q SabreLite, the i.MX6Q SabreSD supports a parallel interface OV5642 module on IPU1 CSI0, and a MIPI CSI-2 OV5640 module. The OV5642 connects to i2c bus 1 and the OV5640 to i2c bus 2.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMFhj hhubh)}(hX]The device tree for SabreSD includes OF graphs for both the parallel OV5642 and the MIPI CSI-2 OV5640, but as of this writing only the MIPI CSI-2 OV5640 has been tested, so the OV5642 node is currently disabled. The OV5640 module connects to MIPI connector J5. The NXP part number for the OV5640 module that connects to the SabreSD board is H120729.h]hX]The device tree for SabreSD includes OF graphs for both the parallel OV5642 and the MIPI CSI-2 OV5640, but as of this writing only the MIPI CSI-2 OV5640 has been tested, so the OV5642 node is currently disabled. The OV5640 module connects to MIPI connector J5. 
The NXP part number for the OV5640 module that connects to the SabreSD board is H120729.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMJhj hhubh)}(hThe following example configures unprocessed video capture pipeline to capture from the OV5640, transmitting on MIPI CSI-2 virtual channel 0:h]hThe following example configures unprocessed video capture pipeline to capture from the OV5640, transmitting on MIPI CSI-2 virtual channel 0:}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMPhj hhubj)}(hX# Setup links media-ctl -l "'ov5640 1-003c':0 -> 'imx6-mipi-csi2':0[1]" media-ctl -l "'imx6-mipi-csi2':1 -> 'ipu1_csi0_mux':0[1]" media-ctl -l "'ipu1_csi0_mux':2 -> 'ipu1_csi0':0[1]" media-ctl -l "'ipu1_csi0':2 -> 'ipu1_csi0 capture':0[1]" # Configure pads media-ctl -V "'ov5640 1-003c':0 [fmt:UYVY2X8/640x480]" media-ctl -V "'imx6-mipi-csi2':1 [fmt:UYVY2X8/640x480]" media-ctl -V "'ipu1_csi0_mux':0 [fmt:UYVY2X8/640x480]" media-ctl -V "'ipu1_csi0':0 [fmt:AYUV32/640x480]"h]hX# Setup links media-ctl -l "'ov5640 1-003c':0 -> 'imx6-mipi-csi2':0[1]" media-ctl -l "'imx6-mipi-csi2':1 -> 'ipu1_csi0_mux':0[1]" media-ctl -l "'ipu1_csi0_mux':2 -> 'ipu1_csi0':0[1]" media-ctl -l "'ipu1_csi0':2 -> 'ipu1_csi0 capture':0[1]" # Configure pads media-ctl -V "'ov5640 1-003c':0 [fmt:UYVY2X8/640x480]" media-ctl -V "'imx6-mipi-csi2':1 [fmt:UYVY2X8/640x480]" media-ctl -V "'ipu1_csi0_mux':0 [fmt:UYVY2X8/640x480]" media-ctl -V "'ipu1_csi0':0 [fmt:AYUV32/640x480]"}hj sbah}(h]h ]h"]h$]h&]hhjjnonej}uh1jhhhMShj hhubh)}(hStreaming can then begin on "ipu1_csi0 capture" node. The v4l2-ctl tool can be used to select any supported pixelformat on the capture device node.h]hStreaming can then begin on “ipu1_csi0 capture” node. 
The v4l2-ctl tool can be used to select any supported pixelformat on the capture device node.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM`hj hhubh)}(hNTo determine what is the /dev/video node correspondent to "ipu1_csi0 capture":h]hRTo determine what is the /dev/video node correspondent to “ipu1_csi0 capture”:}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMdhj hhubj)}(h,media-ctl -e "ipu1_csi0 capture" /dev/video0h]h,media-ctl -e "ipu1_csi0 capture" /dev/video0}hj sbah}(h]h ]h"]h$]h&]hhjjnonej}uh1jhhhMghj hhubh)}(h2/dev/video0 is the streaming element in this case.h]h2/dev/video0 is the streaming element in this case.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMlhj hhubh)}(h$Starting the streaming via v4l2-ctl:h]h$Starting the streaming via v4l2-ctl:}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMnhj hhubj)}(h%v4l2-ctl --stream-mmap -d /dev/video0h]h%v4l2-ctl --stream-mmap -d /dev/video0}hj sbah}(h]h ]h"]h$]h&]hhjjnonej}uh1jhhhMphj hhubh)}(hLStarting the streaming via Gstreamer and sending the content to the display:h]hLStarting the streaming via Gstreamer and sending the content to the display:}(hj, hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMthj hhubj)}(h3gst-launch-1.0 v4l2src device=/dev/video0 ! kmssinkh]h3gst-launch-1.0 v4l2src device=/dev/video0 ! kmssink}hj: sbah}(h]h ]h"]h$]h&]hhjjnonej}uh1jhhhMvhj hhubh)}(hThe following example configures a direct conversion pipeline to capture from the OV5640, transmitting on MIPI CSI-2 virtual channel 0. It also shows colorspace conversion and scaling at IC output.h]hThe following example configures a direct conversion pipeline to capture from the OV5640, transmitting on MIPI CSI-2 virtual channel 0. 
It also shows colorspace conversion and scaling at IC output.}(hjJ hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMzhj hhubj)}(hX # Setup links media-ctl -l "'ov5640 1-003c':0 -> 'imx6-mipi-csi2':0[1]" media-ctl -l "'imx6-mipi-csi2':1 -> 'ipu1_csi0_mux':0[1]" media-ctl -l "'ipu1_csi0_mux':2 -> 'ipu1_csi0':0[1]" media-ctl -l "'ipu1_csi0':1 -> 'ipu1_ic_prp':0[1]" media-ctl -l "'ipu1_ic_prp':1 -> 'ipu1_ic_prpenc':0[1]" media-ctl -l "'ipu1_ic_prpenc':1 -> 'ipu1_ic_prpenc capture':0[1]" # Configure pads media-ctl -V "'ov5640 1-003c':0 [fmt:UYVY2X8/640x480]" media-ctl -V "'imx6-mipi-csi2':1 [fmt:UYVY2X8/640x480]" media-ctl -V "'ipu1_csi0_mux':2 [fmt:UYVY2X8/640x480]" media-ctl -V "'ipu1_csi0':1 [fmt:AYUV32/640x480]" media-ctl -V "'ipu1_ic_prp':1 [fmt:AYUV32/640x480]" media-ctl -V "'ipu1_ic_prpenc':1 [fmt:ARGB8888_1X32/800x600]" # Set a format at the capture interface v4l2-ctl -d /dev/video1 --set-fmt-video=pixelformat=RGB3h]hX # Setup links media-ctl -l "'ov5640 1-003c':0 -> 'imx6-mipi-csi2':0[1]" media-ctl -l "'imx6-mipi-csi2':1 -> 'ipu1_csi0_mux':0[1]" media-ctl -l "'ipu1_csi0_mux':2 -> 'ipu1_csi0':0[1]" media-ctl -l "'ipu1_csi0':1 -> 'ipu1_ic_prp':0[1]" media-ctl -l "'ipu1_ic_prp':1 -> 'ipu1_ic_prpenc':0[1]" media-ctl -l "'ipu1_ic_prpenc':1 -> 'ipu1_ic_prpenc capture':0[1]" # Configure pads media-ctl -V "'ov5640 1-003c':0 [fmt:UYVY2X8/640x480]" media-ctl -V "'imx6-mipi-csi2':1 [fmt:UYVY2X8/640x480]" media-ctl -V "'ipu1_csi0_mux':2 [fmt:UYVY2X8/640x480]" media-ctl -V "'ipu1_csi0':1 [fmt:AYUV32/640x480]" media-ctl -V "'ipu1_ic_prp':1 [fmt:AYUV32/640x480]" media-ctl -V "'ipu1_ic_prpenc':1 [fmt:ARGB8888_1X32/800x600]" # Set a format at the capture interface v4l2-ctl -d /dev/video1 --set-fmt-video=pixelformat=RGB3}hjX sbah}(h]h ]h"]h$]h&]hhjjnonej}uh1jhhhM~hj hhubh)}(h:Streaming can then begin on "ipu1_ic_prpenc capture" node.h]h>Streaming can then begin on “ipu1_ic_prpenc capture” node.}(hjh hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj hhubh)}(hSTo determine what is the /dev/video node 
correspondent to "ipu1_ic_prpenc capture":h]hWTo determine what is the /dev/video node correspondent to “ipu1_ic_prpenc capture”:}(hjv hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj hhubj)}(h1media-ctl -e "ipu1_ic_prpenc capture" /dev/video1h]h1media-ctl -e "ipu1_ic_prpenc capture" /dev/video1}hj sbah}(h]h ]h"]h$]h&]hhjjnonej}uh1jhhhMhj hhubh)}(h2/dev/video1 is the streaming element in this case.h]h2/dev/video1 is the streaming element in this case.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj hhubh)}(h$Starting the streaming via v4l2-ctl:h]h$Starting the streaming via v4l2-ctl:}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj hhubj)}(h%v4l2-ctl --stream-mmap -d /dev/video1h]h%v4l2-ctl --stream-mmap -d /dev/video1}hj sbah}(h]h ]h"]h$]h&]hhjjnonej}uh1jhhhMhj hhubh)}(hLStarting the streaming via Gstreamer and sending the content to the display:h]hLStarting the streaming via Gstreamer and sending the content to the display:}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj hhubj)}(h3gst-launch-1.0 v4l2src device=/dev/video1 ! kmssinkh]h3gst-launch-1.0 v4l2src device=/dev/video1 ! kmssink}hj sbah}(h]h ]h"]h$]h&]hhjjnonej}uh1jhhhMhj hhubeh}(h]%i-mx6q-sabresd-with-mipi-csi-2-ov5640ah ]h"]%i.mx6q sabresd with mipi csi-2 ov5640ah$]h&]uh1hhhhhhhhMDubh)}(hhh](h)}(h Known Issuesh]h Known Issues}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj hhhhhMubhenumerated_list)}(hhh]h)}(hXuWhen using 90 or 270 degree rotation control at capture resolutions near the IC resizer limit of 1024x1024, and combined with planar pixel formats (YUV420, YUV422p), frame capture will often fail with no end-of-frame interrupts from the IDMAC channel. To work around this, use lower resolution and/or packed formats (YUYV, RGB3, etc.) when 90 or 270 rotations are needed. h]h)}(hXsWhen using 90 or 270 degree rotation control at capture resolutions near the IC resizer limit of 1024x1024, and combined with planar pixel formats (YUV420, YUV422p), frame capture will often fail with no end-of-frame interrupts from the IDMAC channel. 
To work around this, use lower resolution and/or packed formats (YUYV, RGB3, etc.) when 90 or 270 rotations are needed.h]hXsWhen using 90 or 270 degree rotation control at capture resolutions near the IC resizer limit of 1024x1024, and combined with planar pixel formats (YUV420, YUV422p), frame capture will often fail with no end-of-frame interrupts from the IDMAC channel. To work around this, use lower resolution and/or packed formats (YUYV, RGB3, etc.) when 90 or 270 rotations are needed.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj ubah}(h]h ]h"]h$]h&]uh1hhj hhhhhNubah}(h]h ]h"]h$]h&]enumtypearabicprefixhsuffix.uh1j hj hhhhhMubeh}(h] known-issuesah ]h"] known issuesah$]h&]uh1hhhhhhhhMubh)}(hhh](h)}(h File listh]h File list}(hj* hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj' hhhhhMubh)}(hHdrivers/staging/media/imx/ include/media/imx.h include/linux/imx-media.hh]hHdrivers/staging/media/imx/ include/media/imx.h include/linux/imx-media.h}(hj8 hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj' hhubeh}(h] file-listah ]h"] file listah$]h&]uh1hhhhhhhhMubh)}(hhh](h)}(h Referencesh]h References}(hjQ hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjN hhhhhMubhfootnote)}(hJhttp://www.nxp.com/assets/documents/data/en/reference-manuals/IMX6DQRM.pdfh](hlabel)}(hhh]h1}(hjg hhhNhNubah}(h]h ]h"]h$]h&]uh1je hja hhhNhNubh)}(hjc h]j)}(hjc h]hJhttp://www.nxp.com/assets/documents/data/en/reference-manuals/IMX6DQRM.pdf}(hjw hhhNhNubah}(h]h ]h"]h$]h&]refurijc uh1jhjt ubah}(h]h ]h"]h$]h&]uh1hhhhMhja ubeh}(h]jdah ]h"]f1ah$]h&]j]ajbKjejfuh1j_ hhhMhjN hhubj` )}(hMhttp://www.nxp.com/assets/documents/data/en/reference-manuals/IMX6SDLRM.pdf h](jf )}(hhh]h2}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1je hj hhhNhNubh)}(hKhttp://www.nxp.com/assets/documents/data/en/reference-manuals/IMX6SDLRM.pdfh]j)}(hj h]hKhttp://www.nxp.com/assets/documents/data/en/reference-manuals/IMX6SDLRM.pdf}(hj hhhNhNubah}(h]h ]h"]h$]h&]refurij uh1jhj ubah}(h]h ]h"]h$]h&]uh1hhhhMhj ubeh}(h]j{ah ]h"]f2ah$]h&]jvajbKjejfuh1j_ hhhMhjN hhubeh}(h] referencesah ]h"] 
referencesah$]h&]uh1hhhhhhhhMubh)}(hhh](h)}(hAuthorsh]hAuthors}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj hhhhhMubh)}(hhh](h)}(h.Steve Longerbeam h]h)}(hj h](hSteve Longerbeam <}(hj hhhNhNubj)}(hsteve_longerbeam@mentor.comh]hsteve_longerbeam@mentor.com}(hj hhhNhNubah}(h]h ]h"]h$]h&]refuri"mailto:steve_longerbeam@mentor.comuh1jhj ubh>}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhj ubah}(h]h ]h"]h$]h&]uh1hhj hhhhhNubh)}(h%Philipp Zabel h]h)}(hj h](hPhilipp Zabel <}(hj hhhNhNubj)}(hkernel@pengutronix.deh]hkernel@pengutronix.de}(hj hhhNhNubah}(h]h ]h"]h$]h&]refurimailto:kernel@pengutronix.deuh1jhj ubh>}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhj ubah}(h]h ]h"]h$]h&]uh1hhj hhhhhNubh)}(h%Russell King h]h)}(h$Russell King h](hRussell King <}(hj8 hhhNhNubj)}(hlinux@armlinux.org.ukh]hlinux@armlinux.org.uk}(hj@ hhhNhNubah}(h]h ]h"]h$]h&]refurimailto:linux@armlinux.org.ukuh1jhj8 ubh>}(hj8 hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhj4 ubah}(h]h ]h"]h$]h&]uh1hhj hhhhhNubeh}(h]h ]h"]h$]h&]jjuh1hhhhMhj hhubh)}(h,Copyright (C) 2012-2017 Mentor Graphics Inc.h]h,Copyright (C) 2012-2017 Mentor Graphics Inc.}(hjf hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj hhubeh}(h]authorsah ]h"]authorsah$]h&]uh1hhhhhhhhMubeh}(h]i-mx-video-capture-driverah ]h"]i.mx video capture driverah$]h&]uh1hhhhhhhhKubeh}(h]h ]h"]h$]h&]sourcehuh1hcurrent_sourceN current_lineNsettingsdocutils.frontendValues)}(hN generatorN datestampN source_linkN source_urlN toc_backlinksentryfootnote_backlinksK sectnum_xformKstrip_commentsNstrip_elements_with_classesN strip_classesN report_levelK halt_levelKexit_status_levelKdebugNwarning_streamN tracebackinput_encoding utf-8-siginput_encoding_error_handlerstrictoutput_encodingutf-8output_encoding_error_handlerj error_encodingutf-8error_encoding_error_handlerbackslashreplace language_codeenrecord_dependenciesNconfigN id_prefixhauto_id_prefixid dump_settingsNdump_internalsNdump_transformsNdump_pseudo_xmlNexpose_internalsNstrict_visitorN_disable_configN_sourceh _destinationN 
_config_files]7/var/lib/git/docbuild/linux/Documentation/docutils.confafile_insertion_enabled raw_enabledKline_length_limitM'pep_referencesN pep_base_urlhttps://peps.python.org/pep_file_url_templatepep-%04drfc_referencesN rfc_base_url&https://datatracker.ietf.org/doc/html/ tab_widthKtrim_footnote_reference_spacesyntax_highlightlong smart_quotessmartquotes_locales]character_level_inline_markupdoctitle_xform docinfo_xformKsectsubtitle_xform image_loadinglinkembed_stylesheetcloak_email_addressessection_self_linkenvNubreporterNindirect_targets]substitution_defs}substitution_names}refnames}(f1]jSaf2]jlaurefids}(j]jajd]jSaj{]jlaunameids}(j j~ jjjcj`jjjjj1j.jjjjjbj_jjj j jFjCjjjjjjj`j]jjjjj7j4jljijjjl ji j j j j j j j$ j! jK jH j j j jdj j{jy jv u nametypes}(j jjcjjj1jjjbjj jFjjjj`jjj7jljjl j j j j$ jK j j j jy uh}(j~ hjhj]jSjvjlj`jjjfjjj.jjj4jjj_jjjej jjCjjjIjjjjj]j jjcjjj4jjij:jjoji jj jo j j j j j! j jH j' j jN jdja j{j jv j jjjju footnote_refs}(j ]jSaj ]jlau citation_refs} autofootnotes](ja j eautofootnote_refs](jSjlesymbol_footnotes]symbol_footnote_refs] footnotes] citations]autofootnote_startKsymbol_footnote_startK id_counter collectionsCounter}j KsRparse_messages]transform_messages]hsystem_message)}(hhh]h)}(hhh]h:Hyperlink target "imx6q-topology-graph" is not referenced.}hjsbah}(h]h ]h"]h$]h&]uh1hhjubah}(h]h ]h"]h$]h&]levelKtypeINFOsourcehlineKxuh1juba transformerN include_log] decorationNhhub.