=========================
ALSA Compress-Offload API
=========================

Pierre-Louis.Bossart <pierre-louis.bossart@linux.intel.com>

Vinod Koul <vinod.koul@linux.intel.com>


Overview
========

Since its early days, the ALSA API was defined with PCM support or
constant-bitrate payloads such as IEC61937 in mind. Arguments and
returned values in frames are the norm, making it a challenge to
extend the existing API to compressed data streams.

In recent years, audio digital signal processors (DSPs) have been
integrated in system-on-chip designs, and DSPs are also integrated in
audio codecs. Processing compressed data on such DSPs results in a
dramatic reduction of power consumption compared to host-based
processing. Support for such hardware has not been very good in
Linux, mostly because of the lack of a generic API available in the
mainline kernel.
Rather than requiring a compatibility break with an API change of the
ALSA PCM interface, a new 'Compressed Data' API is introduced to
provide a control and data-streaming interface for audio DSPs.

The design of this API was inspired by the 2-year experience with the
Intel Moorestown SOC, with many corrections required to upstream the
API in the mainline kernel instead of the staging tree and make it
usable by others.


Requirements
============

The main requirements are:

- separation between byte counts and time. Compressed formats may
  have a header per file, per frame, or no header at all. The payload
  size may vary from frame to frame. As a result, it is not possible
  to reliably estimate the duration of audio buffers when handling
  compressed data. Dedicated mechanisms are required to allow for
  reliable audio-video synchronization, which requires precise
  reporting of the number of samples rendered at any given time.

- Handling of multiple formats. PCM data only requires a
  specification of the sampling rate, number of channels and bits per
  sample. In contrast, compressed data comes in a variety of formats.
  Audio DSPs may also provide support for a limited number of audio
  encoders and decoders embedded in firmware, or may support more
  choices through dynamic download of libraries.

- Focus on main formats. This API provides support for the most
  popular formats used for audio and video capture and playback. It
  is likely that as audio compression technology advances, new
  formats will be added.

- Handling of multiple configurations. Even for a given format like
  AAC, some implementations may support AAC multichannel but only
  HE-AAC stereo. Likewise, WMA10 level M3 may require too much memory
  and too many CPU cycles. The new API needs to provide a generic way
  of listing these formats.

- Rendering/Grabbing only. This API does not provide any means of
  hardware acceleration, where PCM samples are provided back to
  user-space for additional processing. This API focuses instead on
  streaming compressed data to a DSP, with the assumption that the
  decoded samples are routed to a physical output or logical
  back-end.

- Complexity hiding. Existing user-space multimedia frameworks all
  have their own enums/structures for each compressed format. This
  new API assumes the existence of a platform-specific compatibility
  layer to expose, translate and make use of the capabilities of the
  audio DSP, e.g. an Android HAL or PulseAudio sinks. By
  construction, regular applications are not supposed to make use of
  this API.
Design
======

The new API shares a number of concepts with the PCM API for flow
control. Start, pause, resume, drain and stop commands have the same
semantics no matter what the content is.

The concept of a memory ring buffer divided into a set of fragments
is borrowed from the ALSA PCM API. However, only sizes in bytes can
be specified.
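The byte-based buffer description can be sketched as follows. This is illustrative code: the `compr_buffer` struct below is a simplified stand-in modelled on the buffer descriptor in `<sound/compress_params.h>`, not the authoritative UAPI definition.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Simplified stand-in for the byte-based ring buffer description.
 * Unlike the PCM API, everything is expressed in bytes: there is no
 * notion of frames or periods.
 */
struct compr_buffer {
	uint32_t fragment_size;	/* size of one fragment, in bytes */
	uint32_t fragments;	/* number of fragments in the ring buffer */
};

/* Total ring-buffer size is simply fragments * fragment_size bytes. */
static uint32_t compr_buffer_bytes(const struct compr_buffer *buf)
{
	return buf->fragment_size * buf->fragments;
}

/*
 * Check a candidate configuration against the minimum buffer size
 * that the implementation reports (see get_codec_caps below).
 */
static bool compr_buffer_ok(const struct compr_buffer *buf,
			    uint32_t min_buffer_bytes)
{
	return buf->fragment_size > 0 && buf->fragments > 0 &&
	       compr_buffer_bytes(buf) >= min_buffer_bytes;
}
```

User-space would typically pick the fragment size as a multiple of the DSP's DMA burst size and make sure the total is at least the reported minimum before starting playback.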
Seeks/trick modes are assumed to be handled by the host.

The notion of rewinds/forwards is not supported. Data committed to
the ring buffer cannot be invalidated, except when dropping all
buffers.

The Compressed Data API does not make any assumptions on how the data
is transmitted to the audio DSP. DMA transfers from main memory to an
embedded audio cluster or to a SPI interface for external DSPs are
possible. As in the ALSA PCM case, a core set of routines is exposed;
each driver implementer will have to write support for a set of
mandatory routines and possibly make use of optional ones.

The main additions are

get_caps
  This routine returns the list of audio formats supported. Querying
  the codecs on a capture stream will return encoders; decoders will
  be listed for playback streams.

get_codec_caps
  For each codec, this routine returns a list of capabilities. The
  intent is to make sure all the capabilities correspond to valid
  settings, and to minimize the risks of configuration failures. For
  example, for a complex codec such as AAC, the number of channels
  supported may depend on a specific profile. If the capabilities
  were exposed with a single descriptor, it may happen that a
  specific combination of profiles/channels/formats is not supported.
  Likewise, embedded DSPs have limited memory and CPU cycles, so it
  is likely that some implementations make the list of capabilities
  dynamic and dependent on existing workloads. In addition to codec
  settings, this routine returns the minimum buffer size handled by
  the implementation. This information can be a function of the DMA
  buffer sizes, the number of bytes required to synchronize, etc.,
  and can be used by user-space to define how much needs to be
  written in the ring buffer before playback can start.

set_params
  This routine sets the configuration chosen for a specific codec.
  The most important field in the parameters is the codec type; in
  most cases decoders will ignore other fields, while encoders will
  strictly comply with the settings.

get_params
  This routine returns the actual settings used by the DSP. Changes
  to the settings should remain the exception.

get_timestamp
  The timestamp becomes a multiple-field structure. It lists the
  number of bytes transferred, the number of samples processed and
  the number of samples rendered/grabbed. All these values can be
  used to determine the average bitrate, figure out if the ring
  buffer needs to be refilled, or measure the delay due to
  decoding/encoding/IO on the DSP.

Note that the list of codecs/profiles/modes was derived from the
OpenMAX AL specification instead of reinventing the wheel.
Modifications include:

- Addition of FLAC and IEC formats
- Merge of encoder/decoder capabilities
- Profiles/modes listed as bitmasks to make descriptors more compact
- Addition of set_params for decoders (missing in OpenMAX AL)
- Addition of AMR/AMR-WB encoding modes (missing in OpenMAX AL)
- Addition of format information for WMA
- Addition of encoding options when required (derived from OpenMAX IL)
- Addition of rateControlSupported (missing in OpenMAX AL)
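To make the set_params flow concrete, here is a sketch of filling a codec descriptor for MP3 decode. The `ex_codec_params` struct and the `EX_AUDIOCODEC_MP3` value are hypothetical stand-ins modelled loosely on `struct snd_codec` from `<sound/compress_params.h>`; consult the UAPI header for the real field set and codec IDs.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative codec id; not the authoritative UAPI value. */
#define EX_AUDIOCODEC_MP3	0x00000002

/* Simplified stand-in for the codec descriptor passed to set_params(). */
struct ex_codec_params {
	uint32_t id;		/* codec type: the field decoders rely on */
	uint32_t ch_in;		/* input channels */
	uint32_t ch_out;	/* output channels */
	uint32_t sample_rate;	/* in Hz */
	uint32_t bit_rate;	/* in bits per second */
};

/*
 * For playback (decode), only the codec id really matters; decoders
 * typically parse the rest from the bitstream. Encoders, in contrast,
 * must strictly follow every field they are given.
 */
static void ex_fill_mp3_decode_params(struct ex_codec_params *p)
{
	p->id = EX_AUDIOCODEC_MP3;
	p->ch_in = 2;
	p->ch_out = 2;
	p->sample_rate = 44100;
	p->bit_rate = 128000;
}
```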
State Machine
=============

The compressed audio stream state machine is described below ::

                                          +----------+
                                          |          |
                                          |   OPEN   |
                                          |          |
                                          +----------+
                                               |
                                               |
                                               | compr_set_params()
                                               |
                                               v
           compr_free()                  +----------+
    +------------------------------------|          |
    |                                    |   SETUP  |
    |          +-------------------------|          |<-------------------------+
    |          |       compr_write()     +----------+                          |
    |          |                              ^                                |
    |          |                              | compr_drain_notify()           |
    |          |                              |        or                      |
    |          |                              |     compr_stop()               |
    |          |                              |                                |
    |          |                         +----------+                          |
    |          |                         |          |                          |
    |          |                         |   DRAIN  |                          |
    |          |                         |          |                          |
    |          |                         +----------+                          |
    |          |                              ^                                |
    |          |                              |                                |
    |          |                              | compr_drain()                  |
    |          |                              |                                |
    |          v                              |                                |
    |    +----------+                    +----------+                          |
    |    |          |    compr_start()   |          |        compr_stop()      |
    |    | PREPARE  |------------------->|  RUNNING |--------------------------+
    |    |          |                    |          |                          |
    |    +----------+                    +----------+                          |
    |          |                            |    ^                             |
    |          |compr_free()                |    |                             |
    |          |             compr_pause() |    | compr_resume()               |
    |          |                           |    |                              |
    |          v                           v    |                              |
    |    +----------+                   +----------+                           |
    |    |          |                   |          |         compr_stop()      |
    +--->|   FREE   |                   |  PAUSE   |---------------------------+
         |          |                   |          |
         +----------+                   +----------+
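The diagram above can be sketched as a transition table. State and event names mirror the diagram; this is illustrative code, not the kernel's internal representation of stream state.

```c
#include <assert.h>
#include <stdbool.h>

enum ex_state { EX_OPEN, EX_SETUP, EX_PREPARE, EX_RUNNING,
		EX_PAUSE, EX_DRAIN, EX_FREE };

enum ex_event { EX_SET_PARAMS, EX_WRITE, EX_START, EX_STOP,
		EX_PAUSE_EV, EX_RESUME, EX_DRAIN_EV, EX_DRAIN_NOTIFY,
		EX_FREE_EV };

/* Return true and update *s if the event is legal in state *s. */
static bool ex_transition(enum ex_state *s, enum ex_event e)
{
	switch (*s) {
	case EX_OPEN:
		if (e == EX_SET_PARAMS) { *s = EX_SETUP; return true; }
		break;
	case EX_SETUP:
		if (e == EX_WRITE)   { *s = EX_PREPARE; return true; }
		if (e == EX_FREE_EV) { *s = EX_FREE; return true; }
		break;
	case EX_PREPARE:
		if (e == EX_START)   { *s = EX_RUNNING; return true; }
		if (e == EX_FREE_EV) { *s = EX_FREE; return true; }
		break;
	case EX_RUNNING:
		if (e == EX_STOP)     { *s = EX_SETUP; return true; }
		if (e == EX_PAUSE_EV) { *s = EX_PAUSE; return true; }
		if (e == EX_DRAIN_EV) { *s = EX_DRAIN; return true; }
		break;
	case EX_PAUSE:
		if (e == EX_RESUME) { *s = EX_RUNNING; return true; }
		if (e == EX_STOP)   { *s = EX_SETUP; return true; }
		break;
	case EX_DRAIN:
		/* drain completion (or stop) returns the stream to SETUP */
		if (e == EX_DRAIN_NOTIFY || e == EX_STOP) {
			*s = EX_SETUP;
			return true;
		}
		break;
	case EX_FREE:
		break;
	}
	return false;	/* illegal transition, e.g. pause while in SETUP */
}
```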
Gapless Playback
================

When playing through an album, the decoders have the ability to skip
the encoder delay and padding and directly move from one track's
content to another's. The end user can perceive this as gapless
playback, as we don't have silence while switching from one track to
another.

Also, there might be low-intensity noises due to encoding. Perfect
gapless is difficult to reach with all types of compressed data, but
works fine with most music content. The decoder needs to know the
encoder delay and encoder padding, so we need to pass these to the
DSP. This metadata is extracted from ID3/MP4 headers and is not
present by default in the bitstream, hence the need for a new
interface to pass this information to the DSP. The DSP and user-space
also need to switch from one track to another and start using the
data of the second track.
The main additions are:

set_metadata
  This routine sets the encoder delay and encoder padding. This can
  be used by the decoder to strip the silence. This needs to be set
  before the data in the track is written.

set_next_track
  This routine tells the DSP that the metadata and write operations
  sent after this will correspond to the subsequent track.

partial_drain
  This is called when the end of file is reached. User-space can
  inform the DSP that EOF is reached, and the DSP can now start
  skipping the padding delay. Also, the next data written will belong
  to the next track.

The sequence flow for gapless would be:

- Open
- Get caps / codec caps
- Set params
- Set metadata of the first track
- Fill data of the first track
- Trigger start
- User-space finishes sending all data
- Indicate next track data by sending set_next_track
- Set metadata of the next track
- Then call partial_drain to flush most of the buffer in the DSP
- Fill data of the next track
- DSP switches to the second track

(note: the order of partial_drain and the write for the next track
can be reversed as well)
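The sequence above can be sketched with stub functions that simply record the order of operations. The operation names are local stand-ins for the real calls (in user space, the tinycompress library exposes similarly named helpers), so this compiles on its own.

```c
#include <assert.h>
#include <string.h>

#define MAX_CALLS 16

static const char *calls[MAX_CALLS];
static int ncalls;

static void record(const char *op) { calls[ncalls++] = op; }

/* Local stand-ins for the real operations. */
static void set_metadata(void)   { record("set_metadata"); }
static void write_data(void)     { record("write"); }
static void start(void)          { record("start"); }
static void set_next_track(void) { record("set_next_track"); }
static void partial_drain(void)  { record("partial_drain"); }

/* Play track 1, then move gaplessly to track 2. */
static void gapless_two_tracks(void)
{
	set_metadata();		/* delay/padding of track 1 */
	write_data();		/* fill data of track 1 */
	start();

	/* ...user-space has finished sending track 1... */
	set_next_track();	/* what follows belongs to track 2 */
	set_metadata();		/* delay/padding of track 2 */
	partial_drain();	/* flush most of the DSP buffer */
	write_data();		/* fill data of track 2 */
}
```

As the note above says, the partial_drain and the first write of the next track may also be issued in the opposite order.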
Gapless Playback SM
===================

For gapless, we move from the running state to partial drain and
back, along with the setting of meta_data and the signalling for the
next track ::

                                          +----------+
                  compr_drain_notify()    |          |
                +------------------------>|  RUNNING |
                |                         |          |
                |                         +----------+
                |                              |
                |                              |
                |                              | compr_next_track()
                |                              |
                |                              V
                |                         +----------+
                |    compr_set_params()   |          |
                |             +-----------|NEXT_TRACK|
                |             |           |          |
                |             |           +--+-------+
                |             |              |  |
                |             +--------------+  |
                |                               |
                |                               | compr_partial_drain()
                |                               |
                |                               V
                |                         +----------+
                |                         |          |
                +------------------------ | PARTIAL_ |
                                          |  DRAIN   |
                                          +----------+
Support for dynamic bit-rate changes would require a tight coupling between the DSP and the host stack, limiting power savings.”…””}”(hjh²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hËh³hÊh´M!hjubah}”(h]”h ]”h"]”h$]”h&]”uh1jˆhjh²hh³hÊh´Nubj‰)”}”(hŒÃPacket-loss concealment is not supported. This would require an additional interface to let the decoder synthesize data when frames are lost during transmission. This may be added in the future. ”h]”hÌ)”}”(hŒÂPacket-loss concealment is not supported. This would require an additional interface to let the decoder synthesize data when frames are lost during transmission. This may be added in the future.”h]”hŒÂPacket-loss concealment is not supported. This would require an additional interface to let the decoder synthesize data when frames are lost during transmission. This may be added in the future.”…””}”(hj4h²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hËh³hÊh´M%hj0ubah}”(h]”h ]”h"]”h$]”h&]”uh1jˆhjh²hh³hÊh´Nubj‰)”}”(hŒáVolume control/routing is not handled by this API. Devices exposing a compressed data interface will be considered as regular ALSA devices; volume changes and routing information will be provided with regular ALSA kcontrols. ”h]”hÌ)”}”(hŒàVolume control/routing is not handled by this API. Devices exposing a compressed data interface will be considered as regular ALSA devices; volume changes and routing information will be provided with regular ALSA kcontrols.”h]”hŒàVolume control/routing is not handled by this API. Devices exposing a compressed data interface will be considered as regular ALSA devices; volume changes and routing information will be provided with regular ALSA kcontrols.”…””}”(hjLh²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hËh³hÊh´M)hjHubah}”(h]”h ]”h"]”h$]”h&]”uh1jˆhjh²hh³hÊh´Nubj‰)”}”(hŒyEmbedded audio effects. Such effects should be enabled in the same manner, no matter if the input was PCM or compressed. ”h]”hÌ)”}”(hŒxEmbedded audio effects. 
Such effects should be enabled in the same manner, no matter if the input was PCM or compressed.”h]”hŒxEmbedded audio effects. Such effects should be enabled in the same manner, no matter if the input was PCM or compressed.”…””}”(hjdh²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hËh³hÊh´M.hj`ubah}”(h]”h ]”h"]”h$]”h&]”uh1jˆhjh²hh³hÊh´Nubj‰)”}”(hŒ8multichannel IEC encoding. Unclear if this is required. ”h]”hÌ)”}”(hŒ7multichannel IEC encoding. Unclear if this is required.”h]”hŒ7multichannel IEC encoding. Unclear if this is required.”…””}”(hj|h²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hËh³hÊh´M1hjxubah}”(h]”h ]”h"]”h$]”h&]”uh1jˆhjh²hh³hÊh´Nubj‰)”}”(hŒèEncoding/decoding acceleration is not supported as mentioned above. It is possible to route the output of a decoder to a capture stream, or even implement transcoding capabilities. This routing would be enabled with ALSA kcontrols. ”h]”hÌ)”}”(hŒçEncoding/decoding acceleration is not supported as mentioned above. It is possible to route the output of a decoder to a capture stream, or even implement transcoding capabilities. This routing would be enabled with ALSA kcontrols.”h]”hŒçEncoding/decoding acceleration is not supported as mentioned above. It is possible to route the output of a decoder to a capture stream, or even implement transcoding capabilities. This routing would be enabled with ALSA kcontrols.”…””}”(hj”h²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hËh³hÊh´M3hjubah}”(h]”h ]”h"]”h$]”h&]”uh1jˆhjh²hh³hÊh´Nubj‰)”}”(hŒAudio policy/resource management. This API does not provide any hooks to query the utilization of the audio DSP, nor any preemption mechanisms. ”h]”hÌ)”}”(hŒAudio policy/resource management. This API does not provide any hooks to query the utilization of the audio DSP, nor any preemption mechanisms.”h]”hŒAudio policy/resource management. 
This API does not provide any hooks to query the utilization of the audio DSP, nor any preemption mechanisms.”…””}”(hj¬h²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hËh³hÊh´M8hj¨ubah}”(h]”h ]”h"]”h$]”h&]”uh1jˆhjh²hh³hÊh´Nubj‰)”}”(hŒçNo notion of underrun/overrun. Since the bytes written are compressed in nature and data written/read doesn't translate directly to rendered output in time, this does not deal with underrun/overrun and maybe dealt in user-library ”h]”hÌ)”}”(hŒåNo notion of underrun/overrun. Since the bytes written are compressed in nature and data written/read doesn't translate directly to rendered output in time, this does not deal with underrun/overrun and maybe dealt in user-library”h]”hŒçNo notion of underrun/overrun. Since the bytes written are compressed in nature and data written/read doesn’t translate directly to rendered output in time, this does not deal with underrun/overrun and maybe dealt in user-library”…””}”(hjÄh²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hËh³hÊh´M<hjÀubah}”(h]”h ]”h"]”h$]”h&]”uh1jˆhjh²hh³hÊh´Nubeh}”(h]”h ]”h"]”h$]”h&]”j j!uh1jƒh³hÊh´M!hjh²hubeh}”(h]”Œ not-supported”ah ]”h"]”Œ not supported”ah$]”h&]”uh1hµhh·h²hh³hÊh´M ubh¶)”}”(hhh]”(h»)”}”(hŒCredits”h]”hŒCredits”…””}”(hjéh²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hºhjæh²hh³hÊh´MCubj„)”}”(hhh]”(j‰)”}”(hŒEMark Brown and Liam Girdwood for discussions on the need for this API”h]”hÌ)”}”(hjüh]”hŒEMark Brown and Liam Girdwood for discussions on the need for this API”…””}”(hjþh²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hËh³hÊh´MDhjúubah}”(h]”h ]”h"]”h$]”h&]”uh1jˆhj÷h²hh³hÊh´Nubj‰)”}”(hŒ5Harsha Priya for her work on intel_sst compressed API”h]”hÌ)”}”(hjh]”hŒ5Harsha Priya for her work on intel_sst compressed API”…””}”(hjh²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hËh³hÊh´MEhjubah}”(h]”h ]”h"]”h$]”h&]”uh1jˆhj÷h²hh³hÊh´Nubj‰)”}”(hŒ$Rakesh Ughreja for valuable feedback”h]”hÌ)”}”(hj*h]”hŒ$Rakesh Ughreja for valuable feedback”…””}”(hj,h²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hËh³hÊh´MFhj(ubah}”(h]”h 
]”h"]”h$]”h&]”uh1jˆhj÷h²hh³hÊh´Nubj‰)”}”(hŒ‰Sing Nallasellan, Sikkandar Madar and Prasanna Samaga for demonstrating and quantifying the benefits of audio offload on a real platform.”h]”hÌ)”}”(hŒ‰Sing Nallasellan, Sikkandar Madar and Prasanna Samaga for demonstrating and quantifying the benefits of audio offload on a real platform.”h]”hŒ‰Sing Nallasellan, Sikkandar Madar and Prasanna Samaga for demonstrating and quantifying the benefits of audio offload on a real platform.”…””}”(hjCh²hh³Nh´Nubah}”(h]”h ]”h"]”h$]”h&]”uh1hËh³hÊh´MGhj?ubah}”(h]”h ]”h"]”h$]”h&]”uh1jˆhj÷h²hh³hÊh´Nubeh}”(h]”h ]”h"]”h$]”h&]”j j!uh1jƒh³hÊh´MDhjæh²hubeh}”(h]”Œcredits”ah ]”h"]”Œcredits”ah$]”h&]”uh1hµhh·h²hh³hÊh´MCubeh}”(h]”Œalsa-compress-offload-api”ah ]”h"]”Œalsa compress-offload api”ah$]”h&]”uh1hµhhh²hh³hÊh´Kubeh}”(h]”h ]”h"]”h$]”h&]”Œsource”hÊuh1hŒcurrent_source”NŒ current_line”NŒsettings”Œdocutils.frontend”ŒValues”“”)”}”(hºNŒ generator”NŒ datestamp”NŒ source_link”NŒ source_url”NŒ toc_backlinks”Œentry”Œfootnote_backlinks”KŒ sectnum_xform”KŒstrip_comments”NŒstrip_elements_with_classes”NŒ strip_classes”NŒ report_level”KŒ halt_level”KŒexit_status_level”KŒdebug”NŒwarning_stream”NŒ traceback”ˆŒinput_encoding”Œ utf-8-sig”Œinput_encoding_error_handler”Œstrict”Œoutput_encoding”Œutf-8”Œoutput_encoding_error_handler”jŒerror_encoding”Œutf-8”Œerror_encoding_error_handler”Œbackslashreplace”Œ language_code”Œen”Œrecord_dependencies”NŒconfig”NŒ id_prefix”hŒauto_id_prefix”Œid”Œ dump_settings”NŒdump_internals”NŒdump_transforms”NŒdump_pseudo_xml”NŒexpose_internals”NŒstrict_visitor”NŒ_disable_config”NŒ_source”hÊŒ _destination”NŒ _config_files”]”Œ7/var/lib/git/docbuild/linux/Documentation/docutils.conf”aŒfile_insertion_enabled”ˆŒ raw_enabled”KŒline_length_limit”M'Œpep_references”NŒ pep_base_url”Œhttps://peps.python.org/”Œpep_file_url_template”Œpep-%04d”Œrfc_references”NŒ rfc_base_url”Œ&https://datatracker.ietf.org/doc/html/”Œ tab_width”KŒtrim_footnote_reference_space”‰Œsyntax_highlight”Œlong”Œ 
smart_quotes”ˆŒsmartquotes_locales”]”Œcharacter_level_inline_markup”‰Œdoctitle_xform”‰Œ docinfo_xform”KŒsectsubtitle_xform”‰Œ image_loading”Œlink”Œembed_stylesheet”‰Œcloak_email_addresses”ˆŒsection_self_link”‰Œenv”NubŒreporter”NŒindirect_targets”]”Œsubstitution_defs”}”Œsubstitution_names”}”Œrefnames”}”Œrefids”}”Œnameids”}”(jjjgjaj^j'j$jžj›j×jÔjÌjÉjjþjãjàjbj_uŒ nametypes”}”(jj‰ja‰j'‰jž‰j׉j̉j‰jã‰jb‰uh}”(jgh·j^jj$jdj›j*jÔj¡jÉjÚjþjÏjàjj_jæuŒ footnote_refs”}”Œ citation_refs”}”Œ autofootnotes”]”Œautofootnote_refs”]”Œsymbol_footnotes”]”Œsymbol_footnote_refs”]”Œ footnotes”]”Œ citations”]”Œautofootnote_start”KŒsymbol_footnote_start”KŒ id_counter”Œ collections”ŒCounter”“”}”…”R”Œparse_messages”]”Œtransform_messages”]”Œ transformer”NŒ include_log”]”Œ decoration”Nh²hub.