Wwise SDK 2024.1.0
Creating Sound Engine Effect Plug-ins

Implementing an Effect Plug-in Interface

Effect plug-ins apply DSP algorithms to existing sounds fed to them as input audio data. Writing an effect plug-in consists of implementing one of the AK::IAkInPlaceEffectPlugin or AK::IAkOutOfPlaceEffectPlugin interfaces. Only the functions specific to these interfaces are covered here. Refer to Creating Sound Engine Plug-ins for information about interface components shared with other plug-in types (the AK::IAkPlugin interface). Also refer to the provided AkDelay plug-in for details (Samples).

AK::IAkEffectPlugin::Init()

This method prepares the effect plug-in for data processing, allocates memory, and sets up initial conditions.

The plug-in is passed a pointer to a memory allocator interface (AK::IAkPluginMemAlloc). We recommend that you allocate all dynamic memory through this interface using the provided memory allocation macros (refer to Allocating/De-allocating Memory in Audio Plug-ins) so that Wwise can track the memory, and so that any underlying memory hooks in the game engine can use it. For the most common memory allocation needs, namely allocation at initialization and release at termination, the plug-in does not need to retain a pointer to the allocator, because the allocator is also provided to the plug-in on termination.

The AK::IAkEffectPluginContext interface gives access to the global context through AK::IAkPluginContextBase::GlobalContext().

The plug-in also receives a pointer to its associated parameter node interface (AK::IAkPluginParam). Most plug-ins keep a reference to the associated parameter node to be able to retrieve parameters at runtime. Refer to Communication Between the Parameter Node and the Plug-in for more details.

All of these interfaces remain valid throughout the plug-in's lifespan, so it is safe to keep internal references to them when necessary.

Effect plug-ins also receive the input/output audio format (which stays the same during the lifespan of the plug-in) to be able to allocate memory and set up processing for a given channel configuration.
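As an illustration, here is a minimal sketch of an Init() implementation, assuming a hypothetical CMyEffect class with m_pParams, m_uSampleRate, m_pDelayLine, and m_uDelayFrames members:

AKRESULT CMyEffect::Init(
    AK::IAkPluginMemAlloc * in_pAllocator,
    AK::IAkEffectPluginContext * in_pContext,
    AK::IAkPluginParam * in_pParams,
    AkAudioFormat & in_rFormat )
{
    m_pParams = (CMyEffectParams *) in_pParams; // Keep a reference to the parameter node
    m_uSampleRate = in_rFormat.uSampleRate;     // The format is fixed for the plug-in's lifespan
    // Allocate dynamic memory through the provided allocator so that Wwise can track it
    m_pDelayLine = (AkReal32 *) AK_PLUGIN_ALLOC( in_pAllocator, sizeof(AkReal32) * m_uDelayFrames );
    return m_pDelayLine ? AK_Success : AK_InsufficientMemory;
}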

Note: AK::IAkEffectPlugin::Init() is called every time the effect is instantiated, which happens when a voice starts playing or a mixing bus is instantiated. Because other sounds are typically already playing, this must occur within a reasonable amount of time. If you need to initialize large common/global data structures, do so when registering the plug-in library. See Using Global Sound Engine Callbacks From Plug-ins for more details.

AK::IAkEffectPlugin::Execute()

Effect plug-ins may choose to implement one of two interfaces: AK::IAkInPlaceEffectPlugin or AK::IAkOutOfPlaceEffectPlugin. In general, in-place effects (which use the same audio buffer for both input and output data) should be used for most effects. However, when the data flow changes (e.g., a time-stretching effect), the out-of-place interface must be implemented instead.

Warning: Out-of-place effects that have different input/output channel configurations can be inserted in the Master-Mixer hierarchy. However, it is not possible to put rate-changing effects on mixing busses. Effects that have different input/output buffer lengths can only be inserted in the Actor-Mixer hierarchy (as source effects).

AK::IAkInPlaceEffectPlugin::Execute()

This method executes the plug-in's signal processing algorithm in-place on a given audio buffer (refer to Accessing Data Using the AkAudioBuffer Structure for more information). The AkAudioBuffer structure tells the plug-in how many input sample frames are valid (AkAudioBuffer::uValidFrames) and the maximum number of sample frames the buffer can accommodate (the AkAudioBuffer::MaxFrames() method). The AkAudioBuffer::eState member signals to the plug-in whether this is the last execution (AK_NoMoreData) or not (AK_DataReady).
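For reference, a minimal in-place Execute() might look like the following sketch, which applies a simple gain (GetGain() is a hypothetical accessor on the parameter node):

void CMyEffect::Execute( AkAudioBuffer * io_pBuffer )
{
    const AkReal32 fGain = m_pParams->GetGain(); // Hypothetical parameter accessor
    const AkUInt32 uNumChannels = io_pBuffer->NumChannels();
    for ( AkUInt32 i = 0; i < uNumChannels; i++ )
    {
        AkReal32 * AK_RESTRICT pBuf = (AkReal32 * AK_RESTRICT) io_pBuffer->GetChannel( i );
        // Process only the valid frames; uValidFrames may be smaller than MaxFrames()
        for ( AkUInt16 uFrame = 0; uFrame < io_pBuffer->uValidFrames; uFrame++ )
            pBuf[uFrame] *= fGain;
    }
    // eState is left untouched: a plain gain adds no tail, so AK_DataReady/AK_NoMoreData pass through
}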

AK::IAkInPlaceEffectPlugin::TimeSkip() is substituted for Execute() when a virtual voice is playing from elapsed time, to allow plug-ins to keep updating their internal state if desired.
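A minimal TimeSkip() sketch, assuming an effect that only needs to keep a hypothetical LFO phase up to date while the voice is virtual:

AKRESULT CMyEffect::TimeSkip( AkUInt32 in_uFrames )
{
    // Advance internal state as if in_uFrames frames had been processed
    m_fLfoPhase += m_fLfoIncrement * in_uFrames;
    while ( m_fLfoPhase >= 1.0f )
        m_fLfoPhase -= 1.0f;
    return AK_DataReady;
}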

AK::IAkOutOfPlaceEffectPlugin::Execute()

This method executes the plug-in's signal processing for out-of-place algorithms. Two AkAudioBuffer structures are used, one for the input buffer and one for the output buffer. The pipeline uses the eState member of the output audio buffer to determine the current state of the effect. The effect should only return once it has either consumed the entire input buffer (returning AK_DataNeeded to resume execution later with more data) or filled a complete output buffer (returning AK_DataReady). Effect tails can also be implemented in out-of-place effects by replacing the AK_NoMoreData received with AK_DataReady until the effect has finished flushing its internal state (at which point AK_NoMoreData should be returned).

The input buffer is not released by the pipeline until it is entirely consumed. Therefore, it is important to use the in_uInOffset offset parameter to start reading the data where the last Execute() call left off. The sample below provides an example of how to achieve this.

void CAkSimpleUpsampler::Execute(
    AkAudioBuffer * io_pInBuffer,
    AkUInt32 in_uInOffset,
    AkAudioBuffer * io_pOutBuffer )
{
    AKASSERT( io_pInBuffer->NumChannels() == io_pOutBuffer->NumChannels() );
    const AkUInt32 uNumChannels = io_pInBuffer->NumChannels();
    AkUInt32 uFramesConsumed; // Tracks how much data is consumed from the input buffer
    AkUInt32 uFramesProduced; // Tracks how much data is produced to the output buffer
    for ( AkUInt32 i = 0; i < uNumChannels; i++ )
    {
        AkReal32 * AK_RESTRICT pInBuf = (AkReal32 * AK_RESTRICT) io_pInBuffer->GetChannel( i ) + in_uInOffset;
        AkReal32 * AK_RESTRICT pfOutBuf = (AkReal32 * AK_RESTRICT) io_pOutBuffer->GetChannel( i ) + io_pOutBuffer->uValidFrames;
        uFramesConsumed = 0; // Reset for every channel (all channels consume/produce the same amount)
        uFramesProduced = 0;
        while ( ( uFramesConsumed < io_pInBuffer->uValidFrames ) && ( uFramesProduced < io_pOutBuffer->MaxFrames() ) )
        {
            // Processing that consumes input and produces output at a different rate
            // (e.g., time-stretching or resampling). Here: naive 1:2 upsampling by sample doubling.
            *pfOutBuf++ = *pInBuf;
            *pfOutBuf++ = *pInBuf++;
            uFramesConsumed++;
            uFramesProduced += 2;
        }
    }

    // Update the AkAudioBuffer structures so the pipeline can continue processing
    io_pInBuffer->uValidFrames -= uFramesConsumed;
    io_pOutBuffer->uValidFrames += uFramesProduced;

    if ( io_pInBuffer->eState == AK_NoMoreData && io_pInBuffer->uValidFrames == 0 )
        io_pOutBuffer->eState = AK_NoMoreData; // Input entirely consumed and nothing left to output: the effect is done
    else if ( io_pOutBuffer->uValidFrames == io_pOutBuffer->MaxFrames() )
        io_pOutBuffer->eState = AK_DataReady; // A complete output buffer is ready
    else
        io_pOutBuffer->eState = AK_DataNeeded; // More input data is needed to continue processing
}

AK::IAkOutOfPlaceEffectPlugin::TimeSkip() is substituted for Execute() when a virtual voice is playing from elapsed time, to allow plug-ins to keep updating their internal state if desired. This function is responsible for telling the pipeline how many input frames would normally be consumed to produce a given number of output frames.
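For the upsampler above, a TimeSkip() sketch could report that one input frame is consumed for every two output frames (a sketch assuming the io_uFrames in/out parameter of AK::IAkOutOfPlaceEffectPlugin::TimeSkip()):

AKRESULT CAkSimpleUpsampler::TimeSkip( AkUInt32 & io_uFrames )
{
    // io_uFrames comes in as the number of output frames to produce;
    // report back how many input frames would be consumed to produce them
    io_uFrames = ( io_uFrames + 1 ) / 2;
    return AK_DataReady;
}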

Implementing Effect Plug-in Tails

Some effects have an internal state that must be output after the input has finished playing so that it can decay out properly (most notably effects with delay lines). The effect API makes it possible to continue execution even without any valid input data. When the eState flag of the AkAudioBuffer structure becomes AK_NoMoreData, the pipeline no longer feeds valid incoming sample frames to the plug-in after the current execution. The plug-in is then free to write new (subsequent) frames into the buffer (up to the value returned by MaxFrames()) so that it can empty its delay lines after the input signal is finished. The audio pipeline must always be told how many frames have been output by properly updating the uValidFrames field. If the plug-in's Execute() function needs to be called again to finish flushing the effect tail, the eState member should be set to AK_DataReady. The pipeline stops calling the plug-in's Execute() only once the effect has set AK_NoMoreData in the eState field.

The easiest way to handle tails is to use the AkFXTailHandler service class provided in the SDK. With an AkFXTailHandler instance held as a class member of your plug-in, all you need to do within an in-place effect is call AkFXTailHandler::HandleTail(), passing it the AkAudioBuffer and the total number of tail samples to output once the input is finished (which may change from one execution to the next based on a parameter). Refer to the AkDelay plug-in source code for details (Samples).
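Below is a minimal sketch of tail handling in an in-place Execute(), assuming a hypothetical delay effect whose tail length is m_uDelayFrames and which holds an AkFXTailHandler member named m_FXTailHandler:

void CMyDelay::Execute( AkAudioBuffer * io_pBuffer )
{
    // Zero-pads the buffer and updates uValidFrames/eState as needed so the tail keeps playing
    m_FXTailHandler.HandleTail( io_pBuffer, m_uDelayFrames );
    if ( io_pBuffer->uValidFrames == 0 )
        return; // Nothing to process during this execution
    // ... regular DSP on io_pBuffer->uValidFrames frames ...
}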

Note: Important Notes for Executing Effect Plug-ins

  • When a plug-in is used in Wwise, parameter changes are sent down to the parameter node whether or not the parameter supports RTPCs. This allows the plug-in to support runtime changes of non-RTPC values if desired for Wwise usage. If you do not want your plug-in to support this, make a copy of the parameter values at initialization time to ensure they remain the same throughout the plug-in's lifespan.
  • A plug-in should handle several channel configurations (at least mono, stereo, and 5.1 if it can be inserted on busses) or return AK_UnsupportedChannelConfig at initialization time.

Bypass

Plug-ins can be bypassed in Wwise or in-game through various mechanisms, including the UI, events, and RTPCs. In such cases, the plug-in's Execute() routine is not called. When the plug-in resumes on unbypass and the execute function is called again, the plug-in restarts its processing. The plug-in's Reset() function is called on bypass so that its delay lines and other state information can be cleared for a fresh start when the effect is finally unbypassed. Refer to AK::IAkPlugin::Reset() for more information.
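A minimal Reset() sketch for the hypothetical delay effect above, clearing its delay memory so the effect restarts cleanly:

AKRESULT CMyDelay::Reset()
{
    if ( m_pDelayLine )
        memset( m_pDelayLine, 0, sizeof(AkReal32) * m_uDelayFrames ); // Clear the delay memory
    m_uFramePos = 0; // Hypothetical read/write position in the delay line
    return AK_Success;
}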

Warning: Bypassing and unbypassing plug-ins at runtime may result in signal discontinuities, depending on the plug-in and on the sound material being processed.

Refer to the following sections for more information.

Monitoring in the Sound Engine

Monitoring is the process of observing the state of an audio plug-in during its playback.

When the Authoring application is connected through the communication module, Sound Engine plug-in instances have the opportunity to provide their state as an arbitrary buffer of data that is passed to the Authoring side of the plug-in. This data could be performance metrics computed from the execution process or signal levels at different processing stages for metering purposes.

This section covers the Sound Engine side of monitoring. Refer to Monitoring in Authoring for details on the receiving and deserialization process.

Sending Monitoring Data

Sending is done by the Sound Engine part of the plug-in. Here are the steps of this process:

  • Serialization: Because the monitoring data is provided as a buffer of arbitrary size, the Sound Engine part of the plug-in must serialize the data it wishes to send, and its Authoring counterpart must deserialize it upon reception. Serialization is left to the plug-in maker to implement, and the data remains opaque to Wwise. Make sure to use types that have a standard size across platforms; for example, the size of long varies across 64-bit platforms, where some use 4 bytes (LLP64) and others 8 bytes (LP64). Also be aware that platforms with different alignment requirements may produce different struct layouts due to their packing strategy. You may specify a custom packing alignment by using the pack pragma, as in the sketch after this list.
  • Sending: Once the monitoring data is serialized into a buffer, the Sound Engine plug-in uses the function AK::IAkPluginContextBase::PostMonitorData() provided by its plug-in context instance to send the data through the profiler communication channel. This function should be called inside the Execute function.
Note: Because monitoring data is sent asynchronously, AK::IAkPluginContextBase::PostMonitorData() copies the buffer the Sound Engine plug-in provides. This means the buffer can safely be allocated on the stack, or otherwise freed after PostMonitorData() has been called.
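For instance, a fixed-layout monitoring struct might be declared as follows. This is a sketch with hypothetical fields: fixed-size types and an explicit packing alignment keep the layout identical on the game and Authoring sides.

#pragma pack(push, 4)
struct MyMonitorData
{
    AkUInt32 uNumChannels;   // Number of peak values that follow
    AkReal32 fPeaks[8];      // Hypothetical per-channel peak levels
};
#pragma pack(pop)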

It is not always possible to send monitoring data: only when the Wwise Authoring application is profiling can the monitoring data be used. Preparing and sending this data in cases when the Authoring is not actively profiling or when the game simply cannot be profiled is wasteful. There are two ways to avoid this cost when not necessary:

  • Runtime check: The plug-in context provides the function AK::IAkPluginContextBase::CanPostMonitorData(), which returns false when profiling is not active or in any case where posting monitoring data is not supported. Always check whether monitoring is supported using this function prior to calling AK::IAkPluginContextBase::PostMonitorData().
  • Compile-time check: In release builds, the communication module is typically not initialized, so profiling is not possible. You may remove from compilation the section of code preparing and sending the monitoring data by using a preprocessor check on AK_OPTIMIZED. AK_OPTIMIZED is defined by default for release builds when using the Wwise Plug-in development tool wp.py.

The following code example shows how to serialize and send monitoring data:

void MyPlugin::Execute( AkAudioBuffer * io_pBuffer )
{
    const AkUInt32 uNumChannels = io_pBuffer->NumChannels();
    // The algorithm tracks signal peaks for all uNumChannels channels inside the following array
    float fChannelPeaks[MAX_NUM_CHANNELS];
    ...

#ifndef AK_OPTIMIZED // Compile-time check
    if ( m_pContext->CanPostMonitorData() ) // Runtime check; m_pContext is cached from MyPlugin::Init()
    {
        // == Serialization
        // Compute the buffer size: one AkUInt32 (the channel count) followed by the peak values
        AkUInt32 uMonitorDataSize = sizeof(AkUInt32) + uNumChannels * sizeof(float);
        char * pMonitorData = (char *) AkAlloca( uMonitorDataSize );

        // Fill the monitoring data buffer
        *((AkUInt32 *) pMonitorData) = uNumChannels;
        memcpy( pMonitorData + sizeof(AkUInt32), fChannelPeaks, uNumChannels * sizeof(float) );

        // == Sending
        m_pContext->PostMonitorData( pMonitorData, uMonitorDataSize );

        // If the buffer was allocated using AK::IAkPluginMemAlloc, free it here.
        // With AkAlloca, there is nothing to do: the memory is freed automatically when the function returns.
    }
#endif
    ...
}

Object processors are a derivative of audio plug-ins whose Execute function explicitly handles an array of distinct audio signals, each belonging to an individual object. Rather than a single AkAudioBuffer instance, Execute receives an AkAudioObjects wrapper that holds a list of AkAudioBuffer instances, one per object.

Because a single object processor plug-in processes an array of audio buffers, the monitoring data it sends must aggregate the monitoring data for all the objects it is processing. The simplest approach is to send the number of objects and serialize the per-object monitoring data as an array.
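A sketch of that aggregation, assuming the AkAudioObjects layout (uNumObjects and ppObjectBuffers) and a hypothetical ComputePeak() helper:

// Inside the object processor's Execute(), with in_objects of type AkAudioObjects
AkUInt32 uDataSize = sizeof(AkUInt32) + in_objects.uNumObjects * sizeof(AkReal32);
char * pData = (char *) AkAlloca( uDataSize );
*((AkUInt32 *) pData) = in_objects.uNumObjects;               // Object count first
AkReal32 * pPeaks = (AkReal32 *) ( pData + sizeof(AkUInt32) );
for ( AkUInt32 i = 0; i < in_objects.uNumObjects; i++ )
    pPeaks[i] = ComputePeak( in_objects.ppObjectBuffers[i] ); // One entry per object
m_pContext->PostMonitorData( pData, uDataSize );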

For more on object processors, refer to Creating Sound Engine Object Processor Plug-ins.

