Wwise SDK 2022.1.17
Effect plug-ins apply DSP algorithms to existing sounds fed in as input audio data. Writing an effect plug-in consists of implementing either the AK::IAkInPlaceEffectPlugin or the AK::IAkOutOfPlaceEffectPlugin interface. Only the functions specific to these interfaces are covered here. Refer to Creating Sound Engine Plug-ins for information about interface components shared with other plug-in types (the AK::IAkPlugin interface). Also refer to the provided AkDelay plug-in for details (Samples).
This method prepares the effect plug-in for data processing, allocates memory, and sets up initial conditions.
The plug-in is passed a pointer to a memory allocator interface (AK::IAkPluginMemAlloc). We recommend that you allocate all dynamic memory through this interface using the provided memory allocation macros (refer to Allocating/De-allocating Memory in Audio Plug-ins) so that Wwise can track the memory, and so that any underlying memory hooks in the game engine can use it. For the most common memory allocation needs, namely allocation at initialization and release at termination, the plug-in does not need to retain a pointer to the allocator: it is also provided to the plug-in on termination.
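As a sketch of this allocate-in-Init, release-in-Term pattern, the following uses simplified stand-ins for AK::IAkPluginMemAlloc and the SDK's allocation macros (the real macros also record file/line information for memory tracking); the DelayLine type is hypothetical:

```cpp
#include <cstddef>

// Simplified stand-in for AK::IAkPluginMemAlloc (not the real SDK declaration).
class IAkPluginMemAlloc
{
public:
    virtual void* Malloc(size_t in_uSize) = 0;
    virtual void  Free(void* in_pMem) = 0;
    virtual ~IAkPluginMemAlloc() {}
};

// Simplified stand-ins for the SDK's allocation macros.
#define AK_PLUGIN_ALLOC(_allocator, _size) (_allocator)->Malloc(_size)
#define AK_PLUGIN_FREE(_allocator, _ptr)   (_allocator)->Free(_ptr)

// Typical pattern: allocate working buffers in Init(), release them in Term().
struct DelayLine
{
    float* pBuffer = nullptr;
    size_t uFrames = 0;

    bool Init(IAkPluginMemAlloc* in_pAllocator, size_t in_uFrames)
    {
        pBuffer = (float*)AK_PLUGIN_ALLOC(in_pAllocator, in_uFrames * sizeof(float));
        uFrames = pBuffer ? in_uFrames : 0;
        return pBuffer != nullptr;
    }

    void Term(IAkPluginMemAlloc* in_pAllocator)
    {
        // The allocator is passed again at termination, so for this common
        // pattern no pointer to it needs to be retained between calls.
        if (pBuffer)
        {
            AK_PLUGIN_FREE(in_pAllocator, pBuffer);
            pBuffer = nullptr;
        }
    }
};
```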
The AK::IAkEffectPluginContext interface can access the global context through AK::IAkPluginContextBase::GlobalContext().
The plug-in also receives a pointer to its associated parameter node interface (AK::IAkPluginParam). Most plug-ins keep a reference to the associated parameter node so they can retrieve parameters at runtime. Refer to Communication Between Parameter Nodes and Plug-ins for more details.
All of these interfaces remain valid throughout the plug-in's lifespan so it is safe to keep an internal reference to them when necessary.
Effect plug-ins also receive the input/output audio format (which stays the same during the lifespan of the plug-in) to be able to allocate memory and set up processing for a given channel configuration.
Note: AK::IAkEffectPlugin::Init() is called every time the effect is instantiated, which happens when a voice starts playing or when a mixing bus is instantiated. Because other sounds are typically already playing, this must occur within a reasonable amount of time. If you need to initialize large common/global data structures, do so when registering the plug-in library. See Using Global Sound Engine Callbacks in Plug-ins for more details.
Effect plug-ins may choose to implement one of two interfaces: AK::IAkInPlaceEffectPlugin or AK::IAkOutOfPlaceEffectPlugin. In general, in-place effects (which use the same audio buffer for both input and output data) should be used for most effects. However, when there is a change in data flow (e.g. a time-stretching effect), the out-of-place interface must be implemented instead.
Warning: Out-of-place effects that have different input/output channel configurations can be inserted in the Master-Mixer hierarchy. However, it is not possible to put rate-changing effects on mixing busses. Effects that have different input/output buffer lengths can only be inserted in the Actor-Mixer hierarchy (as source effects).
This method executes the plug-in's signal processing algorithm in-place on a given audio buffer (refer to Accessing Data Using the AkAudioBuffer Structure for more information). This structure tells the plug-in how many input samples are valid (AkAudioBuffer::uValidFrames) and the maximum number of audio sample frames the buffer can accommodate (the AkAudioBuffer::MaxFrames() method). The AkAudioBuffer::eState structure member signals to the plug-in whether this is the last execution (AK_NoMoreData) or not (AK_DataReady).
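As a minimal sketch of in-place execution, the following applies a gain to the valid frames of a buffer. The AkAudioBuffer struct and state constants below are simplified single-channel stand-ins for the SDK types, not the real declarations:

```cpp
#include <cstdint>

// Simplified stand-ins for the SDK types (single channel, for brevity).
enum AKRESULT { AK_DataReady, AK_NoMoreData };

struct AkAudioBuffer
{
    float*   pData;        // sample data for one channel
    uint16_t uValidFrames; // number of valid sample frames in the buffer
    AKRESULT eState;       // AK_DataReady, or AK_NoMoreData on the last execution
};

// In-place processing: read and write the same buffer, touching only the
// frames the pipeline marked as valid.
void ExecuteInPlaceGain(AkAudioBuffer* io_pBuffer, float in_fGain)
{
    for (uint16_t i = 0; i < io_pBuffer->uValidFrames; ++i)
        io_pBuffer->pData[i] *= in_fGain;
    // eState is left untouched: with no tail, the effect simply passes
    // through AK_DataReady / AK_NoMoreData as received.
}
```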
AK::IAkInPlaceEffectPlugin::TimeSkip() is substituted for Execute() when a virtual voice is playing from elapsed time, allowing plug-ins to keep updating their internal state if desired.
This method executes the plug-in's signal processing for out-of-place algorithms. Two AkAudioBuffer structures are used: one for the input buffer and one for the output buffer. The pipeline uses the eState field of the output audio buffer to determine the current state of the effect. The effect should only return once it has either consumed the entire input buffer (returning AK_DataNeeded to carry on with execution later, when more data arrives) or filled a complete output buffer (returning AK_DataReady). Effect tails can also be implemented in out-of-place effects by replacing the AK_NoMoreData received on the input with AK_DataReady until the effect has finished flushing its internal state (at which point AK_NoMoreData should be returned).
The input buffer is not released by the pipeline until it is entirely consumed. Therefore, it is important to use the in_uInOffset offset parameter to start reading the data where the last Execute() call left off. The sample below provides an example of how to achieve this.
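The offset logic can be sketched as follows with a trivial pass-through effect. The AkAudioBuffer struct here is a simplified single-channel stand-in for the SDK type (uMaxFrames stands in for the MaxFrames() method), so treat this as an illustration of the bookkeeping, not the real API:

```cpp
#include <algorithm>
#include <cstdint>

// Simplified stand-ins for the SDK types (single channel, for brevity).
enum AKRESULT { AK_DataReady, AK_DataNeeded, AK_NoMoreData };

struct AkAudioBuffer
{
    float*   pData;
    uint16_t uValidFrames; // frames still to consume (in) / frames produced (out)
    uint16_t uMaxFrames;   // stands in for MaxFrames()
    AKRESULT eState;
};

// Out-of-place pass-through: resume reading the input at in_uInOffset, since
// the pipeline keeps the same input buffer until it is fully consumed.
void ExecuteOutOfPlace(AkAudioBuffer* io_pInBuffer, uint32_t in_uInOffset,
                       AkAudioBuffer* io_pOutBuffer)
{
    const float* pIn  = io_pInBuffer->pData + in_uInOffset;
    float*       pOut = io_pOutBuffer->pData + io_pOutBuffer->uValidFrames;

    uint16_t uToCopy = std::min<uint16_t>(
        io_pInBuffer->uValidFrames,
        io_pOutBuffer->uMaxFrames - io_pOutBuffer->uValidFrames);
    std::copy(pIn, pIn + uToCopy, pOut);

    io_pInBuffer->uValidFrames  -= uToCopy; // left over for the next call
    io_pOutBuffer->uValidFrames += uToCopy;

    if (io_pInBuffer->eState == AK_NoMoreData && io_pInBuffer->uValidFrames == 0)
        io_pOutBuffer->eState = AK_NoMoreData; // input fully drained
    else if (io_pOutBuffer->uValidFrames == io_pOutBuffer->uMaxFrames)
        io_pOutBuffer->eState = AK_DataReady;  // a full output buffer was produced
    else
        io_pOutBuffer->eState = AK_DataNeeded; // ask for more input next call
}
```

Note how eState on the output buffer drives the pipeline: AK_DataNeeded requests more input, AK_DataReady delivers a full buffer, and AK_NoMoreData ends execution.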
AK::IAkOutOfPlaceEffectPlugin::TimeSkip() is substituted for Execute() when a virtual voice is playing from elapsed time, allowing plug-ins to keep updating their internal state if desired. This function is responsible for telling the pipeline how many input samples would normally be consumed to produce a given number of output frames.
Some effects have an internal state that must be output after the input is finished playing to decay out properly (most notably effects with delay lines). The effect API makes it possible to continue the execution even without any valid input data. When the eState flag of the AkAudioBuffer structure becomes AK_NoMoreData, the pipeline will no longer feed valid incoming sample frames to the plug-in after the current execution. The plug-in is then free to write new (subsequent) frames in the buffer (up to the value returned by MaxFrames()) to allow emptying delay lines after the input signal is finished. The audio pipeline should always be told how many frames have been output by properly updating the uValidFrames field. If the plug-in Execute() function needs to be called again to finish flushing the effect tail, the eState member should be set to AK_DataReady. The pipeline will stop calling the plug-in Execute() only when the effect has set AK_NoMoreData for the eState field.
The easiest way to handle tails is to use the AkFXTailHandler service class provided in the SDK. With an instance of AkFXTailHandler held as a class member of your plug-in, all you need to do within an in-place effect is call AkFXTailHandler::HandleTail() and pass it the AkAudioBuffer and the total number of audio samples to output once the input is finished (which may change from one execution to the next based on a parameter). Refer to the AkDelay plug-in source code for details (Samples).
Note: Important Notes for Executing Effect Plug-ins
Plug-ins can be bypassed in Wwise or in-game through various mechanisms, including the UI, events, and RTPCs. In such cases, the plug-in's Execute() routine is not called. When the plug-in resumes on unbypass and the Execute() function is called again, the plug-in restarts its processing. The plug-in's Reset() function is called on bypass so that its delay lines and other state information can be cleared for a fresh start when the effect is eventually unbypassed. Refer to AK::IAkPlugin::Reset() for more information.
Warning: Bypassing and unbypassing plug-ins at runtime may result in signal discontinuities, depending on the plug-in and on the sound material being processed.
Refer to the following sections for more details.
Monitoring is the process of observing the state of an audio plug-in during its playback.
When the Authoring application is connected through the communication module, Sound Engine plug-in instances have the opportunity to provide their state as an arbitrary buffer of data that is passed to the Authoring side of the plug-in. This data could be performance metrics computed from the execution process or signal levels at different processing stages for metering purposes.
This section covers the Sound Engine side of monitoring. Refer to Monitoring in Authoring for details on the receiving and deserialization process.
Sending Monitoring Data
Sending is done by the Sound Engine part of the plug-in. Here are the steps of this process:
1. Define the layout of the monitoring data to send. Beware that the size of long varies across 64-bit platforms, where some use 4 bytes (LLP64) and others 8 bytes (LP64). Also be aware that platforms with different alignment requirements may lead to different struct layouts due to the packing strategy; you may specify a custom packing alignment using the pack pragma.
2. Call AK::IAkPluginContextBase::PostMonitorData(), provided by the plug-in context instance, to send the data through the profiler communication channel. This function should be called inside the Execute function.

Note: Because monitoring data is sent asynchronously, AK::IAkPluginContextBase::PostMonitorData() copies the buffer the Sound Engine plug-in provides. This means that the buffer can safely be allocated on the stack, or otherwise freed after PostMonitorData has been called.
It is not always possible to send monitoring data: it can only be used while the Wwise Authoring application is profiling. Preparing and sending this data when the Authoring application is not actively profiling, or when the game simply cannot be profiled, is wasteful. There are two ways to avoid this cost when it is not necessary:

- Call AK::IAkPluginContextBase::CanPostMonitorData(), which returns false when profiling is not active or in any case where posting monitor data is not supported. Always check whether monitoring is supported using this function prior to calling AK::IAkPluginContextBase::PostMonitorData().
- Compile monitoring code out of release builds entirely by guarding it with AK_OPTIMIZED. AK_OPTIMIZED is defined by default for release builds when using the Wwise plug-in development tool wp.py.

The following code example shows how to serialize and send monitoring data:
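The following is a minimal sketch, not the SDK sample: MonitorData is a hypothetical payload, and PostMonitorData is a stand-in for AK::IAkPluginContextBase::PostMonitorData() that mimics its copy-on-post behavior. Fixed-width types sidestep the LLP64/LP64 long pitfall, and the pack pragma pins the struct layout across platforms:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical monitoring payload. Fixed-width types keep the size identical
// on all platforms; pack(1) removes padding so the serialized layout matches
// what the Authoring side expects to deserialize.
#pragma pack(push, 1)
struct MonitorData
{
    uint32_t uSampleRate;
    float    fPeakLevel;
};
#pragma pack(pop)

// Stand-in for AK::IAkPluginContextBase::PostMonitorData(): the real call
// copies the buffer, so a stack-allocated payload is safe.
static std::vector<uint8_t> g_lastPosted;
void PostMonitorData(void* in_pData, uint32_t in_uDataSize)
{
    const uint8_t* p = static_cast<const uint8_t*>(in_pData);
    g_lastPosted.assign(p, p + in_uDataSize);
}

// Called from Execute(); in_bCanPost stands in for CanPostMonitorData().
void SendMonitoring(bool in_bCanPost)
{
    if (!in_bCanPost)
        return; // Authoring is not profiling: skip the cost entirely

    MonitorData data{ 48000, 0.7f };      // stack allocation is fine
    PostMonitorData(&data, sizeof(data)); // buffer is copied by the engine
}
```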
Object processors are a derivative of audio plug-ins that explicitly handle, in their Execute function, an array of distinct audio signals, each belonging to an individual object. Rather than a single AkAudioBuffer instance, Execute receives an Ak3DAudioObjects wrapper object that owns a list of AkAudioBuffer instances, one per object.
Because a single object processor plug-in processes an array of audio buffers, the monitoring data it sends must aggregate the monitoring data for all the objects it is processing. The simplest approach is to send the number of objects and serialize the monitoring data related to each signal as an array.
For more on object processors, refer to Creating Sound Engine Object Processor Plug-ins.
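The count-plus-array aggregation described above can be sketched as follows. This is a hypothetical serialization helper (one peak level per object), not SDK code; the resulting byte buffer is what would be handed to PostMonitorData():

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical aggregated payload for an object processor: a uint32_t object
// count followed by one float peak level per object, as a flat byte buffer.
std::vector<uint8_t> SerializeObjectMonitorData(const std::vector<float>& in_peaks)
{
    const uint32_t uNumObjects = static_cast<uint32_t>(in_peaks.size());
    std::vector<uint8_t> buffer(sizeof(uNumObjects) + in_peaks.size() * sizeof(float));

    std::memcpy(buffer.data(), &uNumObjects, sizeof(uNumObjects));
    std::memcpy(buffer.data() + sizeof(uNumObjects), in_peaks.data(),
                in_peaks.size() * sizeof(float));
    return buffer; // pass buffer.data() / buffer.size() to PostMonitorData()
}
```

The Authoring side would read the count first, then deserialize that many array entries.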