Source plug-ins provide audio content to an output buffer using synthesis methods such as physical modeling, modulation synthesis, and sampling synthesis. Writing source plug-ins consists of implementing the AK::IAkSourcePlugin interface as detailed in this document. Only the functions specific to the AK::IAkSourcePlugin interface are covered here. Refer to How to Create Wwise Sound Engine Plug-ins for information about interface components shared with other plug-in types. Refer to the provided Sine sample plug-in for details (Samples).
This method prepares the source plug-in for data processing, allocates memory, and sets up initial conditions.
The plug-in is passed a pointer to a memory allocator interface (AK::IAkPluginMemAlloc). You should perform all dynamic memory allocation through this interface using the provided memory allocation macros (refer to Allocating/De-allocating Memory in Audio Plug-ins). For the most common memory allocation needs, namely allocation at initialization and release at termination, the plug-in does not need to retain a pointer to the allocator: it is also passed to the plug-in again at termination.
The AK::IAkSourcePluginContext interface allows the plug-in to retrieve information such as the number of loop iterations, or other information related to the context in which the source plug-in operates. The plug-in may also access the global context through AK::IAkPluginContextBase::GlobalContext().
The plug-in also receives a pointer to its associated parameter node interface (AK::IAkPluginParam). Most plug-ins will want to keep a reference to the associated parameter node to be able to retrieve parameters at runtime. Refer to Communication Between Parameter Nodes and Plug-ins for more details.
All of these interfaces will remain valid throughout the plug-in's lifespan so it is safe to keep an internal reference to them when necessary.
The audio format native to the platform on which the plug-in is executed is passed as an argument of the AK::IAkSourcePlugin::Init() function. It is highly recommended that plug-ins output in the platform's native format to avoid the performance penalty incurred by audio format conversion. If for some reason the source is not suited to output in this format, interleaved 16-bit signed samples may also be output. In multi-channel configurations, the source plug-in must also specify whether it is going to output interleaved or deinterleaved data (AkAudioFormat::uInterleaveID). The default, native setting is AK_NONINTERLEAVED. When outputting deinterleaved data, a source plug-in should use the AkAudioBuffer::GetChannel() method to access the buffer of each channel; when outputting interleaved data, it should use AkAudioBuffer::GetInterleavedData(). For interleaved 7.1 data produced by source plug-ins, the channel ordering is L-R-C-LFE-BL-BR-SL-SR. Refer to Accessing Data Using AkAudioBuffer Structure for more details.
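As an illustration of the two layouts described above, the following self-contained sketch (the helper functions are hypothetical, not part of the Wwise SDK) shows the index math behind interleaved and deinterleaved sample storage:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical helpers, not Wwise SDK functions: they only illustrate how a
// (frame, channel) pair maps to a sample index in each layout.

// Interleaved: all channels of one frame are contiguous, so the sample for
// (frame, channel) lives at frame * numChannels + channel.
inline std::size_t InterleavedIndex( std::size_t frame, std::size_t channel, std::size_t numChannels )
{
    return frame * numChannels + channel;
}

// Deinterleaved: each channel is a separate contiguous buffer of maxFrames
// samples (conceptually what AkAudioBuffer::GetChannel() returns a pointer into).
inline std::size_t DeinterleavedIndex( std::size_t frame, std::size_t channel, std::size_t maxFrames )
{
    return channel * maxFrames + frame;
}
```

With the 7.1 interleaved ordering L-R-C-LFE-BL-BR-SL-SR, the LFE channel has index 3, so the LFE sample of frame 10 in an 8-channel stream sits at sample index 10 * 8 + 3 = 83.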
The channel mask passed in will default to a mono (center speaker only) channel configuration. You may change this channel mask to whatever channel configuration you want your source plug-in to output. In the code example below, the channel mask is changed to stereo. The audio buffers received at each Execute() call accommodate the format specified by the plug-in at initialization, and this format will not change for the lifespan of the plug-in, so it is safe to store any format information required for later processing during the initialization routine.
AKRESULT CAkDCOffset::Init(
    AK::IAkPluginMemAlloc *      in_pAllocator,           // Memory allocator interface.
    AK::IAkSourcePluginContext * in_pSourcePluginContext, // Source plug-in context.
    AK::IAkPluginParam *         in_pParams,              // Effect parameters.
    AkAudioFormat &              io_rFormat               // Supported audio output format.
    )
{
    // Keep a pointer to the associated parameter node.
    m_pParams = reinterpret_cast<CAkMyPluginParams*>( in_pParams );

    // Set up helper to handle looping and the possibly changing duration of synthesis.
    m_DurationHandler.Setup( m_pParams->fDuration, in_pSourcePluginContext->GetNumLoops(), io_rFormat.uSampleRate );

    // Output format set to mono native by default (input). Change to stereo output.
    io_rFormat.channelConfig.SetStandard( AK_SPEAKER_SETUP_STEREO );

    ...

    return AK_Success;
}
Note: AK::IAkSourcePlugin::Init() is called every time the effect is instantiated, which happens when a voice starts playing or a mixing bus is instantiated. Since other sounds will typically already be playing, this needs to occur within a reasonable amount of time. If you need to initialize large common/global data structures, consider doing so when registering the plug-in library. See Using Global Sound Engine Callbacks From Plugins for more details.
This method is called by the sound engine after initialization to determine the approximate duration of the source, for the purpose of processing crossfades during sound transitions. The estimated duration of the source should be returned in milliseconds. If looping is applied with a finite number of iterations, the returned duration should correspond to the total duration, taking the number of loop iterations into account. When infinite looping is selected, or when the duration of the source is unknown, the returned duration should be zero. Note that the number of loop iterations retrieved through AK::IAkSourcePluginContext::GetNumLoops() is always zero when infinite looping is selected.
// Get the duration of the source in milliseconds.
AkTimeMs CAkDCOffset::GetDuration() const
{
    return m_DurationHandler.GetDuration() * 1000.f;
}
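In the example above, the loop-count arithmetic is hidden inside the AkFXDurationHandler service. The rule it applies can be sketched with a hypothetical helper (not a Wwise SDK function), where a loop count of zero means infinite looping:

```cpp
#include <cassert>

// Hypothetical helper illustrating the GetDuration() rule: total duration
// scales with the finite loop count, and infinite looping (uNumLoops == 0)
// must report a duration of 0.
inline float EstimateDurationMs( float fIterationDurationSecs, unsigned int uNumLoops )
{
    if ( uNumLoops == 0 )   // Infinite looping: duration is unknown.
        return 0.f;
    return fIterationDurationSecs * uNumLoops * 1000.f; // Total duration, in milliseconds.
}
```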
Note: If RTPC parameters change the duration of the plug-in, the crossfade transition may not last the expected duration.
Note: The easiest way to handle most elapsed-time management (including looping) for a source plug-in is to use the AkFXDurationHandler service, as shown in the Sine plug-in example (Samples).
This method is called by the sound engine when a break action is received. A break action is a smooth stop. The plug-in may implement a way to smoothly terminate playback in this function; usually it should simply stop looping and play the release portion, if there is one. The function must return AK_Success if it either handles or deliberately ignores the break command. If the plug-in cannot handle it, this function should return AK_Fail, which causes the source to stop playing.
// Stop playback after the current loop iteration.
AKRESULT CAkDCOffset::StopLooping()
{
    m_DurationHandler.SetLooping( 1 ); // No longer looping.
    return AK_Success;
}
Note: This function is optional. If you don't implement it, the source will ignore break commands and play normally until the end.
This method is called to obtain an estimate of the normalized amplitude envelope value, between 0 and 1, that the source plug-in will generate during the next call to Execute(). In the current version of Wwise, it is used for HDR processing, where the algorithm uses envelope estimates to attenuate softer sounds accordingly. This feature is optional: if you return 1 (the default), the attenuation of softer sounds will be constant when your plug-in is used under an HDR bus.
If you decide to implement it, ensure that it returns a properly normalized envelope value: either you know the peak value of the waveform you are about to generate, or your amplitude envelope and gain are decoupled so that you can return the current envelope value directly.
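As a minimal sketch of the second case, the helper below (hypothetical, not a Wwise SDK function) normalizes a current peak estimate against a known maximum peak, clamps the result to [0, 1], and falls back to the default value of 1 when the maximum peak is unknown:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch of the math behind a GetEnvelope() implementation,
// assuming the synthesis algorithm can track its current peak amplitude.
inline float NormalizedEnvelope( float fCurrentPeak, float fMaxPeak )
{
    if ( fMaxPeak <= 0.f )
        return 1.f;                      // Unknown peak: fall back to the default of 1.
    float fEnv = std::fabs( fCurrentPeak ) / fMaxPeak;
    return fEnv > 1.f ? 1.f : fEnv;      // Clamp to the expected [0, 1] range.
}
```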
This method executes the source plug-in's audio signal processing algorithm and fills the given audio output buffer (refer to Accessing Data Using AkAudioBuffer Structure). On function entry, the AkAudioBuffer::uValidFrames field of the output buffer is always zero, meaning that there are no valid audio frames in the channel buffers. The AkAudioBuffer::MaxFrames() method returns the maximum number of audio sample frames that the source should fill. The plug-in determines how many output sample frames need to be produced, which may depend on parameter values such as duration. After DSP execution, the source plug-in must tell the audio pipeline how many sample frames were effectively produced by setting the AkAudioBuffer::uValidFrames field of the audio buffer accordingly.
Note: In general, the algorithm should always attempt to produce full buffers to avoid pipeline starvation.
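The per-buffer bookkeeping described above (fill as many frames as possible, report how many were produced, and signal completion) is roughly what the AkFXDurationHandler service does for you. A self-contained sketch, with stand-in names for AK_DataReady and AK_NoMoreData:

```cpp
#include <cassert>
#include <algorithm>

// Stand-ins for AK_DataReady / AK_NoMoreData, for illustration only.
enum State { DataReady, NoMoreData };

// Hypothetical helper sketching the uValidFrames / eState bookkeeping an
// Execute() implementation must perform each audio frame period.
inline State ProduceFrames( unsigned int uMaxFrames,          // AkAudioBuffer::MaxFrames()
                            unsigned int & io_uRemaining,     // Frames left to synthesize
                            unsigned int & out_uValidFrames ) // -> AkAudioBuffer::uValidFrames
{
    // Attempt to produce a full buffer to avoid pipeline starvation.
    out_uValidFrames = std::min( uMaxFrames, io_uRemaining );
    io_uRemaining -= out_uValidFrames;

    // Once nothing remains, AK_NoMoreData tells the pipeline to terminate the plug-in.
    return io_uRemaining > 0 ? DataReady : NoMoreData;
}
```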
The Execute() routine will be called as long as the source plug-in sets the eState field of the AkAudioBuffer structure to AK_DataReady. Once eState is set to AK_NoMoreData, the plug-in will be terminated and will no longer be called by the audio pipeline. The current version of Wwise supports source plug-ins that output up to 6 channels (5.1 setup, see Channel Ordering).
A DC offset plug-in Execute() function is provided below. Refer to CAkSrcSine::Execute() and the various tone generator DSP routines for more details.
// This example demonstrates a simple plug-in that outputs a steady DC offset signal using an RTPC parameter.
void CAkDCOffset::Execute( AkAudioBuffer * io_pBuffer )
{
    // Set new duration when it changes (e.g. through an RTPC parameter).
    m_DurationHandler.SetDuration( m_pParams->fDuration );

    // Determine how many sample frames to produce this execution, and set uValidFrames and eState according to the current state.
    m_DurationHandler.ProduceBuffer( io_pBuffer );

    // Retrieve RTPC DC offset parameter.
    AkReal32 fDCOffset = m_pParams->GetDCOffset();

    // DC offset output DSP (supports any number of channels).
    for ( unsigned int i = 0; i < m_uNumChannels; ++i )
    {
        // AkSampleType is platform-specific (AkReal32 on software platforms).
        AkSampleType * pBufOut = io_pBuffer->GetChannel( i );
        AkUInt32 uFrameCount = io_pBuffer->uValidFrames;
        while ( uFrameCount-- )
        {
            // DC-offset output; converts normalized float to the platform-supported format.
            *pBufOut++ = AK_FLOAT_TO_SAMPLETYPE( fDCOffset );
        }
    }
}
AK::IAkSourcePlugin::TimeSkip() replaces Execute() when a virtual voice is set to "Play from elapsed time". This allows source plug-ins to keep updating their internal state (such as advancing synthesis time) if desired. It can be used to simulate processing that would have taken place, while avoiding most of the CPU cost of plug-in execution. Given the number of frames requested, adjust io_uFrames to the number of frames that would have been produced by a call to Execute(), and return AK_DataReady or AK_NoMoreData depending on whether or not there would be audio output at that point.
Returning AK_NotImplemented will trigger a normal execution of the voice (as if it were not virtual). Therefore, it does not enable the CPU savings of a proper "Play from elapsed time" behavior.
// This example shows how to skip the processing of some frames when the virtual voice is set to "Play from elapsed time".
AKRESULT CAkDCOffset::TimeSkip( AkUInt32 & io_uFrames )
{
    AkUInt16 uValidFrames = (AkUInt16)io_uFrames;
    AkUInt16 uMaxFrames = (AkUInt16)io_uFrames;
    AKRESULT eResult = m_DurationHandler.ProduceBuffer( uMaxFrames, uValidFrames );
    io_uFrames = uValidFrames;
    return eResult;
}
For more information, refer to the following sections: Wwise Sound Engine Plug-ins Overview, Effect Plug-in Interface Implementation, Writing the Wwise Authoring Part of an Audio Plug-in