Effect plug-ins apply DSP algorithms to existing sounds fed as input audio data. Writing an effect plug-in consists of implementing either the AK::IAkInPlaceEffectPlugin or the AK::IAkOutOfPlaceEffectPlugin interface. Only the functions specific to these interfaces are covered here. Refer to How to Create Wwise Sound Engine Plug-ins for information about interface components shared with other plug-in types (the AK::IAkPlugin interface). Also refer to the provided AkDelay plug-in for details (Samples).
This method prepares the effect plug-in for data processing, allocates memory, and sets up initial conditions.
The plug-in is passed a pointer to a memory allocator interface (AK::IAkPluginMemAlloc). You should perform all dynamic memory allocation through this interface using the provided memory allocation macros (refer to Allocating/De-allocating Memory in Audio Plug-ins). For the most common memory allocation needs, namely allocation at initialization and release at termination, the plug-in does not need to retain a pointer to the allocator, because the allocator is also provided to the plug-in at termination.
The AK::IAkEffectPluginContext interface allows the plug-in to retrieve information such as the bypass state and other information related to the context in which the effect plug-in is operating. It can also access the global context through AK::IAkPluginContextBase::GlobalContext().
The plug-in also receives a pointer to its associated parameter node interface (AK::IAkPluginParam). Most plug-ins keep a reference to the associated parameter node so that they can retrieve parameters at runtime. Refer to Communication Between Parameter Nodes and Plug-ins for more details.
All of these interfaces will remain valid throughout the plug-in's lifespan so it is safe to keep an internal reference to them when necessary.
Effect plug-ins also receive the input/output audio format (which stays the same throughout the lifespan of the plug-in) so that they can allocate memory and set up processing for a given channel configuration.
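As an illustration of the allocate-in-Init(), release-in-Term() pattern described above, here is a minimal standalone sketch. The types below are hypothetical stand-ins, not the SDK interfaces: a real plug-in would use AK::IAkPluginMemAlloc with the AK_PLUGIN_NEW/AK_PLUGIN_DELETE macros and read the channel count from AkAudioFormat.

```cpp
#include <cassert>
#include <cstring>
#include <new>

// Hypothetical stand-in for the channel information carried by AkAudioFormat.
struct AudioFormat { unsigned uNumChannels; };

class MyEffectState
{
public:
    // Allocate per-channel state based on the channel configuration,
    // as a real effect would do in AK::IAkEffectPlugin::Init().
    bool Init( const AudioFormat & in_rFormat )
    {
        m_uNumChannels = in_rFormat.uNumChannels;
        m_pfChannelState = new ( std::nothrow ) float[ m_uNumChannels ];
        if ( m_pfChannelState == nullptr )
            return false; // The real API would return AK_InsufficientMemory
        std::memset( m_pfChannelState, 0, m_uNumChannels * sizeof( float ) );
        return true;
    }

    // Release everything allocated in Init(); the allocator is provided
    // again at termination in the real API, so no pointer needs caching.
    void Term()
    {
        delete [] m_pfChannelState;
        m_pfChannelState = nullptr;
    }

    unsigned NumChannels() const { return m_uNumChannels; }

private:
    float *  m_pfChannelState = nullptr;
    unsigned m_uNumChannels   = 0;
};
```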
Note: AK::IAkEffectPlugin::Init() is called every time the effect is instantiated, which happens when a voice starts playing or a mixing bus is instantiated. Since other sounds are typically already playing, this needs to occur within a reasonable amount of time. If you need to initialize large common/global data structures, consider doing so when registering the plug-in library. See Using Global Sound Engine Callbacks From Plugins for more details.
Effect plug-ins implement one of two interfaces: AK::IAkInPlaceEffectPlugin or AK::IAkOutOfPlaceEffectPlugin. In-place effects (which use the same audio buffer for both input and output data) should be used for most effects. However, when there is a change in data flow (e.g. a time-stretching effect), the out-of-place interface must be implemented instead.
Caution: Out-of-place effects that have different input/output channel configurations can be inserted in the Master-Mixer hierarchy. However, it is not possible to put rate-changing effects on mixing busses. Effects that have different input/output buffer lengths can only be inserted in the Actor-Mixer hierarchy (as source effects).
This method executes the plug-in's signal processing algorithm in-place on a given audio buffer (refer to Accessing Data Using AkAudioBuffer Structure for more information). This structure tells the plug-in how many input sample frames are valid (AkAudioBuffer::uValidFrames) and the maximum number of sample frames the buffer can accommodate (the AkAudioBuffer::MaxFrames() method). The AkAudioBuffer::eState member signals whether this is the last execution (AK_NoMoreData) or not (AK_DataReady).
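As a simplified standalone sketch of the in-place contract, the snippet below applies a gain to the valid frames of a buffer. The Buffer struct and State enum are stand-ins that only mimic the AkAudioBuffer fields discussed above (single channel, for brevity); they are not the SDK types.

```cpp
#include <cassert>
#include <cstdint>

// Minimal stand-ins mirroring AkAudioBuffer::eState and the fields used here.
enum State { DataReady, NoMoreData };
struct Buffer
{
    float *  pData;
    uint32_t uValidFrames; // number of valid sample frames in the buffer
    State    eState;       // last execution (NoMoreData) or not (DataReady)
};

// In-place processing: input and output share the same buffer.
void ExecuteGain( Buffer * io_pBuffer, float in_fGain )
{
    for ( uint32_t i = 0; i < io_pBuffer->uValidFrames; ++i )
        io_pBuffer->pData[ i ] *= in_fGain;
    // A gain produces exactly as many frames as it consumes and has no
    // tail, so uValidFrames and eState are left untouched.
}
```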
AK::IAkInPlaceEffectPlugin::TimeSkip() is called instead of Execute() when a virtual voice is playing from elapsed time, allowing plug-ins to keep updating their internal state if desired.
This method executes the plug-in's signal processing for out-of-place algorithms. Two AkAudioBuffer structures are used: one for the input buffer and one for the output buffer. The pipeline uses the eState field of the output audio buffer to determine the current state of the effect. The effect should only return once it has either consumed the entire input buffer (returning AK_DataNeeded to resume execution with more data later) or filled a complete output buffer (returning AK_DataReady). Effect tails can also be implemented in out-of-place effects by replacing the AK_NoMoreData received with AK_DataReady until the effect has finished flushing its internal state (at which point AK_NoMoreData should be returned).
The input buffer is not released by the pipeline until it is entirely consumed. It is therefore important to use the in_uInOffset offset parameter to start reading the data where the last Execute() call left off. The sample below shows how to achieve this.
void CAkSimpleUpsampler::Execute(
    AkAudioBuffer * io_pInBuffer,
    AkUInt32        in_uInOffset,
    AkAudioBuffer * io_pOutBuffer )
{
    assert( io_pInBuffer->NumChannels() == io_pOutBuffer->NumChannels() );
    const AkUInt32 uNumChannels = io_pInBuffer->NumChannels();

    AkUInt32 uFramesConsumed; // Track how much data is consumed from the input buffer
    AkUInt32 uFramesProduced; // Track how much data is produced to the output buffer

    for ( AkUInt32 i = 0; i < uNumChannels; i++ )
    {
        AkReal32 * AK_RESTRICT pInBuf = (AkReal32 * AK_RESTRICT) io_pInBuffer->GetChannel( i ) + in_uInOffset;
        AkReal32 * AK_RESTRICT pfOutBuf = (AkReal32 * AK_RESTRICT) io_pOutBuffer->GetChannel( i ) + io_pOutBuffer->uValidFrames;

        uFramesConsumed = 0; // Reset for every channel
        uFramesProduced = 0;
        while ( ( uFramesConsumed < io_pInBuffer->uValidFrames )
             && ( uFramesProduced < io_pOutBuffer->MaxFrames() ) )
        {
            // Do some processing that consumes input and produces output
            // at a different rate (e.g. time-stretch or resampling)
            *pfOutBuf++ = *pInBuf;
            *pfOutBuf++ = *pInBuf++;
            uFramesConsumed++;
            uFramesProduced += 2;
        }
    }

    // Update the AkAudioBuffer structures to continue processing
    io_pInBuffer->uValidFrames -= uFramesConsumed;
    io_pOutBuffer->uValidFrames += uFramesProduced;

    if ( io_pInBuffer->eState == AK_NoMoreData && io_pInBuffer->uValidFrames == 0 )
        io_pOutBuffer->eState = AK_NoMoreData; // Input entirely consumed and nothing more to output: the effect is done
    else if ( io_pOutBuffer->uValidFrames == io_pOutBuffer->MaxFrames() )
        io_pOutBuffer->eState = AK_DataReady;  // A complete audio buffer is ready
    else
        io_pOutBuffer->eState = AK_DataNeeded; // More input data is needed to continue processing
}
AK::IAkOutOfPlaceEffectPlugin::TimeSkip() is called instead of Execute() when a virtual voice is playing from elapsed time, allowing plug-ins to keep updating their internal state if desired. This function is responsible for telling the pipeline how many input samples would normally be consumed to produce a given number of output frames.
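For a 2x upsampler like the one in the sample above, TimeSkip() would report that half as many input frames are consumed as output frames produced. The function below is a standalone sketch of that arithmetic only; it does not use the actual SDK signature.

```cpp
#include <cassert>
#include <cstdint>

// Standalone sketch: given a requested number of output frames, report how
// many input frames a normal Execute() call of a 2x upsampler would consume.
uint32_t TimeSkipUpsampler( uint32_t in_uOutputFramesRequested )
{
    // Round up so that an odd request still advances the input position.
    return ( in_uOutputFramesRequested + 1 ) / 2;
}
```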
Some effects have an internal state that must be output after the input has finished playing in order to decay properly (most notably effects with delay lines). The effect API makes it possible to continue execution even without any valid input data. When the eState flag of the AkAudioBuffer structure becomes AK_NoMoreData, the pipeline no longer feeds valid incoming sample frames to the plug-in after the current execution. The plug-in is then free to write new (subsequent) frames into the buffer (up to the value returned by MaxFrames()) to empty its delay lines after the input signal is finished. The audio pipeline must always be told how many frames have been output by properly updating the uValidFrames field. If the plug-in's Execute() function needs to be called again to finish flushing the effect tail, the eState member should be set to AK_DataReady. The pipeline stops calling the plug-in's Execute() only once the effect has set the eState field to AK_NoMoreData.
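The tail protocol described above can be sketched as a small standalone state machine. The types below are stand-ins mimicking the relevant AkAudioBuffer fields, and the tail frames are written as silence for brevity; a real effect would output its decaying delay-line content instead.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Stand-ins mirroring the AkAudioBuffer fields used by the tail protocol.
enum State { DataReady, NoMoreData };
struct Buffer
{
    float *  pData;
    uint32_t uValidFrames; // frames currently valid in the buffer
    uint32_t uMaxFrames;   // MaxFrames() in AkAudioBuffer
    State    eState;
};

struct TailHandler
{
    uint32_t uTailFramesRemaining; // frames of tail still to flush

    void Execute( Buffer * io_pBuffer )
    {
        // ... normal processing of io_pBuffer->uValidFrames frames ...
        if ( io_pBuffer->eState == NoMoreData && uTailFramesRemaining > 0 )
        {
            // Append tail frames after the (possibly empty) valid input.
            uint32_t uFree  = io_pBuffer->uMaxFrames - io_pBuffer->uValidFrames;
            uint32_t uFlush = std::min( uFree, uTailFramesRemaining );
            for ( uint32_t i = 0; i < uFlush; ++i )
                io_pBuffer->pData[ io_pBuffer->uValidFrames + i ] = 0.f; // decayed output here
            io_pBuffer->uValidFrames += uFlush;
            uTailFramesRemaining -= uFlush;
            // Ask to be called again until the whole tail has been output.
            io_pBuffer->eState = ( uTailFramesRemaining > 0 ) ? DataReady : NoMoreData;
        }
    }
};
```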
The easiest way to handle tails is to use the AkFXTailHandler service class provided in the SDK. With an AkFXTailHandler instance held as a class member of your plug-in, all you need to do within an in-place effect is call AkFXTailHandler::HandleTail() and pass it the AkAudioBuffer and the total number of audio sample frames to output once the input is finished (which may change from one execution to the next based on a parameter). Refer to the AkDelay plug-in source code for details (Samples).
Plug-ins can be bypassed in Wwise or in-game through various mechanisms, including the UI, events, and RTPCs. In such cases the plug-in's Execute() routine is not called. The plug-in's Reset() function is called on bypass so that its delay lines and other state information can be cleared, ensuring a fresh start when the effect is eventually unbypassed and Execute() is called again. Refer to AK::IAkPlugin::Reset() for more information.
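As a standalone sketch of what Reset() typically does for a delay-based effect, the class below (hypothetical names, not the SDK interface) zeroes its delay line and rewinds its write position so that processing restarts cleanly after an unbypass:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical delay-based effect; Reset() clears all internal state.
class DelayEffect
{
public:
    explicit DelayEffect( size_t in_uDelayFrames )
        : m_delayLine( in_uDelayFrames, 0.f ) {}

    // Called by the pipeline on bypass: stale delay-line content would
    // otherwise leak into the output when the effect resumes.
    void Reset()
    {
        std::fill( m_delayLine.begin(), m_delayLine.end(), 0.f );
        m_uWritePos = 0;
    }

    void PushFrame( float in_fSample )
    {
        m_delayLine[ m_uWritePos ] = in_fSample;
        m_uWritePos = ( m_uWritePos + 1 ) % m_delayLine.size();
    }

    size_t WritePos() const { return m_uWritePos; }
    float  Peek( size_t i ) const { return m_delayLine[ i ]; }

private:
    std::vector<float> m_delayLine;
    size_t             m_uWritePos = 0;
};
```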
Caution: Bypassing and unbypassing plug-ins at runtime may result in signal discontinuities, depending on the plug-in and on the sound material being processed.
AKRESULT CAkGain::Init(
    AK::IAkPluginMemAlloc *      in_pAllocator, // Memory allocator interface.
    AK::IAkEffectPluginContext * in_pFXCtx,     // FX context.
    AK::IAkPluginParam *         in_pParams,    // Effect parameters.
    AkAudioFormat &              in_rFormat     // Required audio input format.
    )
{
    if ( in_pFXCtx->IsSendModeEffect() )
    {
        // Effect used in an environmental (send) context
        ...
    }
    else
    {
        // Effect inserted in the DSP chain
        ...
    }
}
For monitoring purposes, a plug-in may need to send information back to its Wwise authoring counterpart. Common examples include information about the audio signal (for example, VU meter levels) and run-time information such as memory consumption.
Before posting data to be sent asynchronously through the profiling system, your effect should first determine whether it is possible to send data to the authoring-side counterpart of this effect plug-in instance by calling AK::IAkEffectPluginContext::CanPostMonitorData(). To do this, you need to cache the pointer to the plug-in execution context that is handed to the plug-in at effect initialization. Note that it is only possible to post data when the plug-in instance is on a bus, because only in this case is there a one-to-one relationship with its effect settings view.
If data can be sent and the build target is one that can communicate with Wwise (i.e. not the release target), you can post a data block of any size, organized as you like, by calling AK::IAkEffectPluginContext::PostMonitorData(). Once the data has been posted for monitoring, you can safely discard the data block from the plug-in.
void MyPlugin::Execute( AkAudioBuffer * io_pBuffer )
{
    // The algorithm tracks signal peaks for all m_uNumChannels channels in the following array
    float fChannelPeaks[ MAX_NUM_CHANNELS ];
    ...
#ifndef AK_OPTIMIZED
    if ( m_pCtx->CanPostMonitorData() )
    {
        unsigned int uMonitorDataSize = sizeof( unsigned int ) + m_uNumChannels * sizeof( float );
        char * pMonitorData = (char *) AkAlloca( uMonitorDataSize );
        *((unsigned int *) pMonitorData) = m_uNumChannels;
        memcpy( pMonitorData + sizeof( unsigned int ), fChannelPeaks, m_uNumChannels * sizeof( float ) );
        m_pCtx->PostMonitorData( pMonitorData, uMonitorDataSize );
        // pMonitorData can now be released
    }
#endif
    ...
}