Wwise SDK 2022.1.18
Creating Sound Engine Object Processor Plug-ins
Warning: Object Processors are supersets of Effect Plug-Ins in that they can implement everything an Effect Plug-In can and offer additional functionality. However, they are more complicated to write. If you don't need to process multiple Audio Objects at once, and don't need to know about the object that is processed but only its audio signal, then you should write an Effect Plug-In instead. See Implementing an Effect Plug-in Interface.

Introduction

Object Processors are similar to Effect Plug-Ins (see Implementing an Effect Plug-in Interface) in that they are inserted in one of the slots of the Effect tab of any Wwise Object. The difference with Effect Plug-Ins is apparent when used with Audio Objects. In essence, Effect Plug-Ins are agnostic with respect to Audio Objects (AkAudioObject) and only process audio signals, while Object Processors are aware of all Audio Objects passing through a bus, process them all at once and can access their metadata.

While Effect Plug-Ins receive a single AkAudioBuffer, Object Processors receive an instance of AkAudioObjects, which provides:

  • An array of AkAudioBuffer instances representing the Audio Objects' signal.
  • An array of AkAudioObject instances representing the Audio Objects' metadata.

These concepts are explained in the next section.
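For illustration, here is a minimal sketch of traversing an AkAudioObjects collection (the helper name WalkObjects is hypothetical; only the struct fields shown above are assumed):

void WalkObjects(const AkAudioObjects& in_objects) // Hypothetical helper, for illustration only.
{
    for (AkUInt32 i = 0; i < in_objects.uNumObjects; ++i)
    {
        AkAudioBuffer* pBuffer = in_objects.ppObjectBuffers[i]; // The ith object's audio signal.
        AkAudioObject* pObject = in_objects.ppObjects[i];       // The ith object's metadata.
        // e.g., read pBuffer->NumChannels() channels of audio, or inspect pObject->positioning.
    }
}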

Audio Objects: Concepts

An Audio Object contains an audio signal, which may be mono or multi-channel. In Wwise, Audio Objects are most easily observed when a bus's configuration is set to Audio Objects. These busses are referred to as Audio Object busses. Instead of mixing its inputs into a single buffer, multi-channel or not, an Audio Object bus gathers Audio Objects and retains their metadata. An Audio Object bus supports a dynamic number of Audio Objects, while a non-Audio Object bus may be seen as a bus that only supports a single Audio Object at any one time.

Audio Objects' metadata essentially consists of positioning information, as well as an array of custom metadata, which are themselves plug-ins and may be used by Object Processor plug-ins. Refer to Audio Object Metadata for more details.

Implementing an Object Processor

When mounted on an Audio Object bus, an Effect Plug-In will be instantiated as many times as there are Audio Objects, and the lifetime of each instance corresponds to the lifetime of the Audio Object to which it is assigned. Indeed, a single instance cannot be reused to process each Audio Object in turn, because the Effect may be maintaining a state that is required to guarantee continuity of the audio with the following frame.

On the other hand, an Object Processor is instantiated once, regardless of the number of Audio Objects, and processes all these objects at once.

Object Processors implement AK::IAkInPlaceObjectPlugin or AK::IAkOutOfPlaceObjectPlugin, depending on whether they are processing in-place or out-of-place. Plug-ins declare that they are Object Processors by returning AkPluginInfo::bCanProcessObjects set to true from AK::IAkPlugin::GetPluginInfo. They declare that they are in-place or out-of-place by setting AkPluginInfo::bIsInPlace accordingly.
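As a minimal sketch, the GetPluginInfo implementation of a hypothetical in-place Object Processor (the class name is assumed) might look like this:

AKRESULT MyObjectProcessorFX::GetPluginInfo(AkPluginInfo& out_rPluginInfo)
{
    out_rPluginInfo.eType = AkPluginTypeEffect;           // Object Processors are registered like Effect plug-ins.
    out_rPluginInfo.bIsInPlace = true;                    // Set to false for an out-of-place Object Processor.
    out_rPluginInfo.bCanProcessObjects = true;            // This is what declares the plug-in as an Object Processor.
    out_rPluginInfo.uBuildVersion = AK_WWISESDK_VERSION_COMBINED;
    return AK_Success;
}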

Warning: Out-of-place Object Processors are only supported on busses. As is the case with Effect Plug-Ins on busses, they cannot change the rate at which they consume and produce data (AkPluginInfo::bCanChangeRate cannot be true). They can however change the output objects and their channel configuration.

Only the functions specific to these interfaces are covered here. Refer to Creating Sound Engine Plug-ins for information about interface components shared with other plug-in types (the AK::IAkPlugin interface), and to Implementing an Effect Plug-in Interface for information about functionality shared with Effect Plug-Ins, such as initialization, bypassing, and monitoring.

Implementing an In-Place Object Processor

In-place Object Processors receive an array of Audio Objects' buffers and metadata (AkAudioBuffer and AkAudioObject, respectively) in AK::IAkInPlaceObjectPlugin::Execute. They are able to read and modify the audio signal and metadata of all Audio Objects, but they cannot create or remove Audio Objects, or change their channel configuration.

Note that when in-place Object Processors are inserted as Effects on objects in the Actor-Mixer or Interactive Music Hierarchies, as opposed to on busses, Object Metadata is invalid during Execute, and any modifications to the Metadata are not preserved. An Object Processor can detect this situation by checking whether the objects' keys are equal to AK_INVALID_AUDIO_OBJECT_ID, and either handle it gracefully or flag an error, as sketched below.
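For instance, such a check inside Execute could be sketched as follows:

for (AkUInt32 i = 0; i < io_objects.uNumObjects; ++i)
{
    if (io_objects.ppObjects[i]->key == AK_INVALID_AUDIO_OBJECT_ID)
    {
        // No valid Object Metadata here (e.g., inserted in the Actor-Mixer Hierarchy):
        // fall back to processing the audio signal only, or report an error to the user.
    }
}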

The Compressor is an example of an in-place Object Processor. It needs to be an Object Processor, because its algorithm depends on knowing the audio signal of all Audio Objects at once, but it does not modify the list of Audio Objects. It simply modifies the audio signal of the objects that pass through.

void CAkCompressorFX::Execute(
    const AkAudioObjects& io_objects // Input objects and object buffers.
)
{
    // Downmix the Audio Objects' signal into a working buffer "m_estBuffer",
    // perform gain estimation from that buffer and compute a gain to apply to each sample of each Audio Object.
    // ...

    // Apply the compressor gain (samples of m_estBuffer) to each sample of each Audio Object.
    // For each object.
    for (AkUInt32 i = 0; i < io_objects.uNumObjects; ++i)
    {
        // For each channel of this object.
        const AkUInt32 uNumChannels = io_objects.ppObjectBuffers[i]->NumChannels();
        for (AkUInt32 j = 0; j < uNumChannels; ++j)
        {
            AkReal32* pInBuf = io_objects.ppObjectBuffers[i]->GetChannel(j);
            const AkReal32* pInBufEnd = pInBuf + io_objects.ppObjectBuffers[i]->uValidFrames;
            AkReal32* estBuf = m_estBuffer;
            while (pInBuf < pInBufEnd)
            {
                *pInBuf *= *estBuf;
                ++estBuf;
                ++pInBuf;
            }
        }
    }
}

The Compressor is, of course, capable of processing a single object; that is, it works on traditional channel-based busses. It thus supersedes its old Effect Plug-In implementation.

Handling Tails in In-Place Object Processors

Like Effect Plug-Ins, in-place Object Processors can handle tails by changing the AkAudioBuffer::eState field from AK_NoMoreData to AK_DataReady for the desired number of frames. Refer to Implementing Effect Plug-in Tails for more details. It is important to note, however, that Object Processors need to handle the tails of all their Audio Objects independently, and would thus need to keep track of each object.
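As a sketch, assuming a hypothetical per-object bookkeeping helper GetTailState keyed on AkAudioObject::key, per-object tail handling in Execute might look like this:

for (AkUInt32 i = 0; i < io_objects.uNumObjects; ++i)
{
    AkAudioBuffer* pBuf = io_objects.ppObjectBuffers[i];
    AkUInt32& uTailFrames = GetTailState(io_objects.ppObjects[i]->key); // Hypothetical per-object state, keyed by AkAudioObject::key.
    if (pBuf->eState == AK_NoMoreData && uTailFrames > 0)
    {
        // Render the remainder of this object's tail (e.g., a decaying delay line) into pBuf here.
        pBuf->uValidFrames = pBuf->MaxFrames();
        uTailFrames -= AkMin(uTailFrames, (AkUInt32)pBuf->MaxFrames());
        pBuf->eState = AK_DataReady; // Claim the frames so the object stays alive another frame.
    }
}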

Caution: Do not store the addresses of audio objects passed to Execute. They can change from frame to frame. The preferred way of identifying an object is to use the AkAudioObject::key field.

Implementing an Out-of-Place Object Processor

Out-of-place Object Processors manage distinct sets of Audio Objects at their input and output. The input Audio Objects depend on the host bus, while the output Audio Objects are created explicitly by the plug-in, using one of two methods discussed in the following sections. All these objects are passed to the plug-in at every frame via AK::IAkOutOfPlaceObjectPlugin::Execute.

Out-of-Place Object Processors and Bus Configuration

The channel configuration of the output Audio Objects is decided by the plug-in.

Additionally, a non-Audio Object (or "single-object") bus is essentially a special case of the general Audio Object bus, with only one Audio Object. Since an Object Processor can receive and produce any number of Audio Objects, it is capable of making a single-object bus output a dynamic number of Audio Objects, and vice versa: it can make an Audio Object bus output a single Audio Object, as if it were a traditional mixing bus.

Note: When inserted on a non-Audio Object bus, an Object Processor receives the actual channel configuration of that bus in AK::IAkEffectPlugin::Init. Since Object Processors are supersets of Effect Plug-Ins, you should strive to make yours work seamlessly, whether it is inserted on an Audio Object or non-Audio Object bus. That is, unless it is a user error to insert it on a non-Audio Object bus. For example, it does not make much sense for users to insert the Software Binauralizer plug-in on a non-Audio Object bus, because the downmixed audio would not carry any useful positioning information. In this case, it is better to let your users know that they probably made a mistake.

Handling Tails and Object Lifetime in Out-of-Place Object Processors

The sound engine resets all input and output Audio Objects' AkAudioBuffer::eState to AK_NoMoreData at the beginning of every frame. Any Audio Object whose AkAudioBuffer::eState remains set to AK_NoMoreData after processing is destroyed. Object Processors are not destroyed until all input and output objects have been destroyed. Thus, in order for an out-of-place Object Processor to continue producing audio when it no longer has any input Audio Objects, it simply needs to keep one or multiple output Audio Objects alive by setting their state to AK_DataReady.

Caution: Use caution if you keep track of objects within your Object Processor. Make sure not to refer to an object after it has been destroyed by the sound engine.
Note: Input objects will be kept alive if you set their AkAudioBuffer::eState to AK_DataReady, but this should be avoided since it is not necessary to produce tails.
Note: An out-of-place Object Processor that outputs zero Audio Objects outputs silence.
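For example, here is a minimal sketch of keeping an output object alive to render a tail after all inputs are gone, assuming hypothetical members m_uTailFramesRemaining and RenderTail:

if (in_objects.uNumObjects == 0 && out_objects.uNumObjects > 0 && m_uTailFramesRemaining > 0)
{
    AkAudioBuffer* pOut = out_objects.ppObjectBuffers[0];
    RenderTail(pOut); // Hypothetical: fills pOut's channels and sets pOut->uValidFrames.
    m_uTailFramesRemaining -= AkMin(m_uTailFramesRemaining, (AkUInt32)pOut->uValidFrames);
    pOut->eState = AK_DataReady; // Keeps this output object (and hence the plug-in) alive.
}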

Out-of-Place Object Processing Examples

Let's explore out-of-place object processing with the following three canonical examples.

Software Binauralizer

A software binauralizer may be implemented as an out-of-place Object Processor that takes in multiple Audio Objects, and outputs a single Audio Object with a stereo channel configuration. Such a plug-in should be mounted on an Audio Object bus, but it will effectively make this bus output a single stereo signal.

A convenient way to create the one and only stereo output object is via the handshaking method of AK::IAkEffectPlugin::Init.

AKRESULT ObjectBinauralizerFX::Init(
    AK::IAkPluginMemAlloc* in_pAllocator,
    AK::IAkEffectPluginContext* in_pContext,
    AK::IAkPluginParam* in_pParams,
    AkAudioFormat& io_rFormat)
{
    m_pContext = in_pContext;

    // io_rFormat.channelConfig.eConfigType will be different than AK_ChannelConfigType_Objects if the configuration of the input of the plug-in is known and does not support a
    // dynamic number of objects. However, this plug-in is pointless if it is not instantiated on an Audio Object bus, so we are better off letting our users know.
    if (io_rFormat.channelConfig.eConfigType != AK_ChannelConfigType_Objects)
        return AK_UnsupportedChannelConfig;

    // Inform the host that the output will be stereo. The host will create an output object for us and pass it to Execute.
    io_rFormat.channelConfig.SetStandard(AK_SPEAKER_SETUP_STEREO);
    return AK_Success;
}

Then, in Execute:

void ObjectBinauralizerFX::Execute(
    const AkAudioObjects& in_objects,  // Input objects and object audio buffers.
    const AkAudioObjects& out_objects  // Output objects and object audio buffers.
)
{
    AKASSERT(in_objects.uNumObjects > 0);   // Should never be called with 0 objects if this plug-in does not force tails.
    AKASSERT(out_objects.uNumObjects == 1); // Output config is a stereo channel stream.

    // "Binauralize" (just mix) objects into the stereo output buffer.
    // For the purpose of this demonstration, instead of applying HRTF filters, let's call the built-in service to compute panning gains.

    // The output object should be stereo. Clear its two channels.
    memset(out_objects.ppObjectBuffers[0]->GetChannel(0), 0, out_objects.ppObjectBuffers[0]->MaxFrames() * sizeof(AkReal32));
    memset(out_objects.ppObjectBuffers[0]->GetChannel(1), 0, out_objects.ppObjectBuffers[0]->MaxFrames() * sizeof(AkReal32));

    AKRESULT eState = AK_NoMoreData;
    for (AkUInt32 i = 0; i < in_objects.uNumObjects; ++i)
    {
        // State management: set the output to AK_DataReady as long as one of the inputs is AK_DataReady. Otherwise set it to AK_NoMoreData.
        if (in_objects.ppObjectBuffers[i]->eState != AK_NoMoreData)
            eState = in_objects.ppObjectBuffers[i]->eState;

        // Prepare a mixing matrix for this input.
        const AkUInt32 uNumChannelsIn = in_objects.ppObjectBuffers[i]->NumChannels();
        AK::SpeakerVolumes::MatrixPtr mx = (AK::SpeakerVolumes::MatrixPtr)AkAllocaSIMD(
            AK::SpeakerVolumes::Matrix::GetRequiredSize(
                uNumChannelsIn,
                2));
        AK::SpeakerVolumes::Matrix::Zero(mx, uNumChannelsIn, 2);

        // Compute panning gains and fill the mixing matrix.
        m_pContext->GetMixerCtx()->ComputePositioning(
            in_objects.ppObjects[i]->positioning,
            in_objects.ppObjectBuffers[i]->GetChannelConfig(),
            out_objects.ppObjectBuffers[0]->GetChannelConfig(),
            mx
        );

        // Using the mixing matrix, mix the channels of the ith input object into the one and only stereo output object.
        AK_GET_PLUGIN_SERVICE_MIXER(m_pContext->GlobalContext())->MixNinNChannels(
            in_objects.ppObjectBuffers[i],
            out_objects.ppObjectBuffers[0],
            1.f,
            1.f,
            mx, // NOTE: To properly interpolate from frame to frame and avoid any glitch, we would need to store the previous matrix (OR positional information) for each object.
            mx);
    }

    // Set the output object's state.
    out_objects.ppObjectBuffers[0]->uValidFrames = in_objects.ppObjectBuffers[0]->MaxFrames();
    out_objects.ppObjectBuffers[0]->eState = eState;
}

3D Panner

A 3D Panner can be implemented such that it works like the Software Binauralizer described above. However, a more elegant way to implement it is to have it instantiate a set of output Audio Objects that each correspond to a spatialized virtual microphone. That way, these Audio Objects' signals are panned by a bus or device downstream, which makes the best use of the positioning metadata of these virtual microphones.

In AK::IAkEffectPlugin::Init, instead of returning a non-object configuration, create output objects explicitly using AK::IAkEffectPluginContext::CreateOutputObjects.

static const int k_uNumObjectsOut = 6;

AKRESULT Ak3DPannerFX::Init(
    AK::IAkPluginMemAlloc* in_pAllocator,
    AK::IAkEffectPluginContext* in_pContext,
    AK::IAkPluginParam* in_pParams,
    AkAudioFormat& io_rFormat
)
{
    // Create output objects.
    // Desired channel configuration for all new objects: mono.
    AkChannelConfig channelConfig;
    channelConfig.SetStandard(AK_SPEAKER_SETUP_MONO);

    // AkAudioObjects::uNumObjects, the number of objects to create.
    // AkAudioObjects::ppObjectBuffers, returned array of pointers to the newly created object buffers, allocated by the caller. Pass nullptr if they're not needed.
    // AkAudioObjects::ppObjects, returned array of pointers to the newly created objects, allocated by the caller. Pass nullptr if they're not needed.
    AkAudioObject* ppObjects[k_uNumObjectsOut];
    AkAudioObjects outputObjects;
    outputObjects.uNumObjects = k_uNumObjectsOut;
    outputObjects.ppObjects = ppObjects;
    outputObjects.ppObjectBuffers = nullptr; // Not needed.
    AKRESULT res = in_pContext->CreateOutputObjects(
        channelConfig,
        outputObjects
    );
    if (res == AK_Success)
    {
        // Set output objects' 3D positions as if they were laid out as a 5.1 config around the listener.
        // FL
        ppObjects[0]->positioning.threeD.xform.SetPosition(-0.707f, 0.f, 0.707f);
        // Store the objects' keys so we can later retrieve them (see helper below).
        m_objectKeys[0] = ppObjects[0]->key;
        // FR
        //...
    }
    return res;
}
// Helper function: find the object having key in_key in the array in_objects.
static AkUInt32 FindOutputObjectIdx(AkAudioObjectID in_key, AkAudioObject** in_objects)
{
    for (int i = 0; i < k_uNumObjectsOut; i++)
    {
        if (in_objects[i]->key == in_key)
            return i;
    }
    AKASSERT(false);
    return (AkUInt32)-1;
}
void Ak3DPannerFX::Execute(
    const AkAudioObjects& in_objects,  // Input objects and object audio buffers.
    const AkAudioObjects& out_objects  // Output objects and object audio buffers.
)
{
    // Compute the panning of each object into a temp buffer, tempBuffer.
    // ...

    // Copy each channel of the temp buffer to its corresponding output object.
    AKASSERT(k_uNumObjectsOut == out_objects.uNumObjects);
    for (int i = 0; i < k_uNumObjectsOut; i++)
    {
        // Find the corresponding output object.
        // In Execute, the order of objects is not reliable. We need to search for each object in the array of output objects, using the helper defined above.
        AkUInt32 idx = FindOutputObjectIdx(m_objectKeys[i], out_objects.ppObjects);

        // Copy the ith temp buffer's channel into the proper output object.
        memcpy(out_objects.ppObjectBuffers[idx]->GetChannel(0), tempBuffer.GetChannel(i), tempBuffer.uValidFrames * sizeof(AkReal32));

        // Set the output object's state to avoid garbage collection by the host.
        out_objects.ppObjectBuffers[idx]->uValidFrames = tempBuffer.uValidFrames;
        out_objects.ppObjectBuffers[idx]->eState = tempBuffer.eState;
    }
}

In the above example, you may wonder why (-0.707f, 0.f, 0.707f) represents the front left. See On 3D Transformations for details.

Particle Generator

For each of its input Audio Objects, a Particle Generator would create N output Audio Objects randomly positioned around the position of their corresponding input object. This type of Object Processor cannot create objects in Init; it needs to create them dynamically in Execute, and keep track of both the generated objects and the input objects. When an input object's state is AK_NoMoreData, the state of the corresponding output objects should be set to AK_NoMoreData as well. This ensures that they are garbage collected by the sound engine.

Caution: If an out-of-place Object Processor calls AK::IAkEffectPluginContext::CreateOutputObjects from within Execute, it cannot reliably access the output objects passed in out_objects. In that case, it must use AK::IAkEffectPluginContext::GetOutputObjects.
// The plug-in needs to maintain a map of input object keys to generated objects, like so
// (e.g., as a member: AkMixerInputMap<AkAudioObjectID, GeneratedObjects> m_mapInObjsToOutObjs;):
struct GeneratedObjects
{
    AkUInt32 numObjs;
    AkAudioObjectID apObjectKeys[AK_MAX_GENERATED_OBJECTS];
    int index; // We use an index to mark each output object as "visited" and to map it to its input object (index in the array) at the same time.
};

void ParticleGeneratorFX::Execute(
    const AkAudioObjects& in_objects,  // Input objects and object audio buffers.
    const AkAudioObjects& out_objects  // Output objects and object audio buffers.
)
{
    AKASSERT(in_objects.uNumObjects > 0); // Should never be called with 0 objects if this plug-in supports no tail.

    // Object bookkeeping.
    for (AkUInt32 i = 0; i < in_objects.uNumObjects; ++i)
    {
        // Find this object in our map.
        AkAudioObject* inobj = in_objects.ppObjects[i];
        GeneratedObjects* pEntry = m_mapInObjsToOutObjs.Exists(inobj->key);
        if (pEntry)
        {
            pEntry->index = i; // Found. Note down the index for later.
        }
        else
        {
            // New. Create a new entry and new associated output objects.
            pEntry = m_mapInObjsToOutObjs.AddInput(inobj->key);
            if (pEntry)
            {
                AkUInt32 numObjsOut = 1;
                if (inobj->positioning.behavioral.spatMode != AK_SpatializationMode_None)
                {
                    // If "3D": create between one and AK_MAX_GENERATED_OBJECTS output objects.
                    AkReal32 fRandom = rand() / ((AkReal32)RAND_MAX);
                    numObjsOut = (AkUInt32)(fRandom * (AK_MAX_GENERATED_OBJECTS - 1) + 1.f);
                }
                // Else just create one object.
                AkAudioObject** arNewObjects = (AkAudioObject**)AkAlloca(numObjsOut * sizeof(AkAudioObject*));
                AkAudioObjects outputObjects;
                outputObjects.uNumObjects = numObjsOut;
                outputObjects.ppObjectBuffers = nullptr;
                outputObjects.ppObjects = arNewObjects;
                if (m_pContext->CreateOutputObjects(in_objects.ppObjectBuffers[i]->GetChannelConfig(), outputObjects) == AK_Success)
                {
                    for (AkUInt32 iObj = 0; iObj < numObjsOut; iObj++)
                    {
                        AkAudioObject* pObject = arNewObjects[iObj];
                        pEntry->apObjectKeys[iObj] = pObject->key;

                        // Copy the input object's positional metadata, but randomize the actual position.
                        pObject->positioning.threeD = inobj->positioning.threeD;

                        // Randomize position and assign to output object.
                        // NOTE: By randomizing the position now, at object creation time, and not updating it with inobj->positioning, particles will remain fixed relative to the listener's head throughout their
                        // existence. We could choose to instead store an offset in our map and apply it to inobj at each frame.
                        AkVector pos = ComputeRandomPosition(inobj->positioning.threeD.xform.Position());
                        pObject->positioning.threeD.xform.SetPosition(pos);
                    }
                    pEntry->numObjs = numObjsOut;
                    pEntry->index = i;
                }
            }
        }
    }

    // Copy input objects' signal to corresponding output objects. Garbage collect objects (on our side) along the way.
    // We cannot use out_objects because we changed the collection of objects during this call! Use GetOutputObjects instead.
    // First, query the number of objects.
    AkAudioObjects outputObjects;
    outputObjects.uNumObjects = 0;
    outputObjects.ppObjectBuffers = nullptr;
    outputObjects.ppObjects = nullptr;
    m_pContext->GetOutputObjects(outputObjects);
    if (outputObjects.uNumObjects > 0)
    {
        // Allocate arrays on the stack and retrieve the output objects.
        AkAudioBuffer** buffersOut = (AkAudioBuffer**)AkAlloca(outputObjects.uNumObjects * sizeof(AkAudioBuffer*));
        AkAudioObject** objectsOut = (AkAudioObject**)AkAlloca(outputObjects.uNumObjects * sizeof(AkAudioObject*));
        outputObjects.ppObjectBuffers = buffersOut;
        outputObjects.ppObjects = objectsOut;
        m_pContext->GetOutputObjects(outputObjects);

        // Iterate through our internal map.
        AkMixerInputMap<AkAudioObjectID, GeneratedObjects>::Iterator it = m_mapInObjsToOutObjs.Begin();
        while (it != m_mapInObjsToOutObjs.End())
        {
            // Has the input object been passed to Execute?
            if ((*it).pUserData->index >= 0)
            {
                // Yes. Copy its signal into each of its associated output objects.
                AkAudioBuffer* inbuf = in_objects.ppObjectBuffers[(*it).pUserData->index];
                const AkUInt32 uNumChannels = inbuf->NumChannels();
                for (AkUInt32 out = 0; out < (*it).pUserData->numObjs; out++)
                {
                    // Find the corresponding output object.
                    AkAudioBuffer* pBufferOut = nullptr;
                    AkAudioObject* pObjOut = nullptr;
                    for (AkUInt32 i = 0; i < outputObjects.uNumObjects; i++)
                    {
                        if (objectsOut[i]->key == (*it).pUserData->apObjectKeys[out])
                        {
                            pBufferOut = buffersOut[i];
                            pObjOut = objectsOut[i];
                            break;
                        }
                    }
                    if (pObjOut)
                    {
                        // Copy each channel.
                        for (AkUInt32 j = 0; j < uNumChannels; ++j)
                        {
                            AkReal32* pInBuf = inbuf->GetChannel(j);
                            AkReal32* outBuf = pBufferOut->GetChannel(j);
                            memcpy(outBuf, pInBuf, inbuf->uValidFrames * sizeof(AkReal32));
                        }
                        // Copy state.
                        pBufferOut->uValidFrames = inbuf->uValidFrames;
                        pBufferOut->eState = inbuf->eState;
                        // Also, since there is a clear association of input objects to output objects, propagate the associated input object's custom metadata to the output.
                        pObjOut->arCustomMetadata.Copy(in_objects.ppObjects[(*it).pUserData->index]->arCustomMetadata);
                    }
                }
                (*it).pUserData->index = -1; // "Clear" the index for the next frame.
                ++it;
            }
            else
            {
                // Destroy stale objects.
                // Output objects are collected by the host if we don't set their eState explicitly to AK_DataReady.
                // However, here we need to get rid of them on our side, otherwise our map would grow indefinitely.
                it = m_mapInObjsToOutObjs.EraseSwap(it);
            }
        }
    }
}

Assigning a Name to an Audio Object

In the authoring tool's Audio Object Profiler, Audio Objects are given the name of the Wwise Objects from which they were instigated. Output objects of an out-of-place Object Processor are therefore all given the name of the hosting bus. To make profiling easier, it is recommended to use AkAudioObject::SetName to provide a name to output objects, when it makes sense to do so.

For example, when creating objects in the 3D Panner above, we could have named them like this:

// FL
ppObjects[0]->SetName(in_pAllocator, "FL");
//...

Audio Object Metadata

The AkAudioObject struct encompasses all the Audio Object metadata that travels along Audio Objects' audio buffers throughout the object pipeline. It can be split into three categories:

Positioning Metadata

An Audio Object carries in AkAudioObject::positioning the positioning data of the sound from which it was instigated. AkAudioObject::positioning.behavioral holds all the relevant positioning settings that can be found on Wwise Objects. For example, if the sound uses speaker panning, AkAudioObject::positioning.behavioral.panType will be set to one of the panner types, and panLR, panBF and panDU will correspond to the position of the panner.

If the spatialization mode is 3D (AK_SpatializationMode_PositionOnly or AK_SpatializationMode_PositionAndOrientation), AkAudioObject::positioning.threeD contains all the data that relates to the 3D position:

  • AkAudioObject::positioning.threeD.xform will typically inherit the position of the associated game object, although it can be modified or overridden for each sound depending on its 3D positioning settings, such as 3D automation.
  • AkAudioObject::positioning.threeD.spread and AkAudioObject::positioning.threeD.focus are usually calculated from the Attenuation Curves.
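
For illustration, a sketch of reading this positioning data for the ith input object in Execute (only fields mentioned above are used):

for (AkUInt32 i = 0; i < in_objects.uNumObjects; ++i)
{
    const AkAudioObject* pObject = in_objects.ppObjects[i];
    if (pObject->positioning.behavioral.spatMode != AK_SpatializationMode_None)
    {
        const AkVector pos = pObject->positioning.threeD.xform.Position(); // Listener-relative position (see On 3D Transformations).
        const AkReal32 fSpread = pObject->positioning.threeD.spread;       // Typically driven by Attenuation Curves.
        const AkReal32 fFocus = pObject->positioning.threeD.focus;
        // ... use pos, fSpread and fFocus to spatialize this object's signal.
    }
}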

Cumulative Gain Metadata

An Audio Object carries with it the cumulative gain that has been applied upstream, such as the setting of a source's Volume, or changes in gain from busses and connections between busses. In a simple scenario, with no Effects or Object Processors, this means the gain of an Audio Object isn't applied to its audio signal until it is finally mixed down into a speaker bed, or sent to the system output as an Audio Object. This results in an audio mix that changes smoothly, because it avoids ramping the gain of an Audio Object's audio signal multiple times in a frame, especially when Audio Objects are created and destroyed as Game Objects add or remove positions.

Support for this metadata is optional for Object Processors, and is enabled by setting AkPluginInfo::bUsesGainAttribute to true in the Object Processor's implementation of AK::IAkPlugin::GetPluginInfo. If bUsesGainAttribute is left at false, all audio buffers passed to Execute have the cumulative gain of the Audio Object applied before execution, and the gain passed to the Object Processor is neutral. However, if bUsesGainAttribute is set to true, the audio buffers are not modified, and the cumulative gain may be a non-unit value. This allows the Object Processor to acknowledge the gain as needed, and also to modify the cumulative gain of the Audio Object as desired.

If the cumulative gain of the Audio Object is to be modified, take note that the value is an AkRamp, and continuity of the fNext value on one frame to the fPrev value on the next frame is not automatically handled by the sound engine. That is, if an Object Processor intends to modify fNext on one frame, then the same modification must be applied to fPrev on the next frame. If this is not managed properly, there may be audio glitches, or discontinuities in the audio signal, when some other part of the audio pipeline has to consume the Audio Object's gain.
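Here is a minimal sketch of consuming and neutralizing an object's gain in an in-place Execute, assuming the plug-in has set bUsesGainAttribute and that the gain is carried in AkAudioObject::cumulativeGain (an AkRamp with fPrev and fNext):

for (AkUInt32 i = 0; i < io_objects.uNumObjects; ++i)
{
    AkAudioObject* pObject = io_objects.ppObjects[i];
    AkAudioBuffer* pBuffer = io_objects.ppObjectBuffers[i];
    const AkRamp gain = pObject->cumulativeGain;
    const AkUInt32 uNumChannels = pBuffer->NumChannels();
    for (AkUInt32 ch = 0; ch < uNumChannels; ++ch)
    {
        AkReal32* pChan = pBuffer->GetChannel(ch);
        for (AkUInt16 frame = 0; frame < pBuffer->uValidFrames; ++frame)
        {
            // Interpolate linearly from fPrev to fNext across the audio frame.
            const AkReal32 fGain = gain.fPrev + (gain.fNext - gain.fPrev) * ((AkReal32)frame / (AkReal32)pBuffer->MaxFrames());
            pChan[frame] *= fGain;
        }
    }
    // The gain has been applied to the signal: reset it to neutral so it is not applied again downstream.
    pObject->cumulativeGain.fPrev = 1.f;
    pObject->cumulativeGain.fNext = 1.f;
}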

On 3D Transformations

If the spatialization mode AkAudioObject::positioning.behavioral.spatMode is 3D (AK_SpatializationMode_PositionOnly or AK_SpatializationMode_PositionAndOrientation), the 3D position is transformed (translated and rotated) such that it is relative to the game object (listener) associated to the bus in which it travels. For example, a sound may be positioned at (2, 0, 0); when it is processed by an Audio Object bus associated to a listener at (10, 0, 0), the resulting Audio Object will be positioned at (-8, 0, 0). Object Processors on this bus will "see" the Audio Object as being located at (-8, 0, 0).

The coordinate system in Wwise is left-handed and the default orientation's front vector points towards Z and the top vector points towards Y, as defined in AkCommonDefs.h.

/// Default listener transform.
#define AK_DEFAULT_LISTENER_POSITION_X (0.0f) // at coordinate system's origin
#define AK_DEFAULT_LISTENER_POSITION_Y (0.0f)
#define AK_DEFAULT_LISTENER_POSITION_Z (0.0f)
#define AK_DEFAULT_LISTENER_FRONT_X (0.0f) // looking toward Z,
#define AK_DEFAULT_LISTENER_FRONT_Y (0.0f)
#define AK_DEFAULT_LISTENER_FRONT_Z (1.0f)
#define AK_DEFAULT_TOP_X (0.0f) // top towards Y
#define AK_DEFAULT_TOP_Y (1.0f)
#define AK_DEFAULT_TOP_Z (0.0f)

With the Software Binauralizer, for example, input objects have been rotated prior to reaching the plug-in, such that positive Z is to the front of the listener, positive X is to its right, and so on. This is also why the service AK::IAkMixerPluginContext::ComputePositioning used in that example does not take the listener's orientation: it assumes default orientation.

In the 3D Panner example above, (-0.707, 0, 0.707) thus points at 45 degrees towards the front left of the game object associated to the hosting bus.

Custom Metadata Plug-Ins

Object Processors are able to access custom metadata attached to Audio Objects.

Custom metadata is a type of plug-in that consists only of a set of parameters. In the authoring tool, metadata plug-ins can be added to any Wwise Object, and they support ShareSets. In the sound engine, they implement the AK::IAkPluginParam interface.

At every frame, each Audio Object gathers all the metadata plug-ins attached to the Sound from which it was instigated, if applicable, and adds them to its AkAudioObject::arCustomMetadata array. It then gathers all the metadata plug-ins attached to every bus it visits. Object Processors, in-place or out-of-place, can read this array. Of course, they need to know about the plug-in in order to interpret its content.

The implementor of an Object Processor can write one or multiple companion metadata plug-ins that users can add to Wwise Objects.

For example, imagine that in the Software Binauralizer described previously (see Software Binauralizer), you want to support a passthrough mode, so that certain sounds don't undergo HRTF filtering. You would create a companion metadata plug-in ObjectBinauralizerMetadata with a boolean property called Passthrough. Your users would be able to add this plug-in to any Wwise Object they want to opt out of HRTF. Then, in your Object Processor's Execute():

for (AkUInt32 i = 0; i < in_objects.uNumObjects; ++i)
{
    // Assume that Passthrough Mode is false if there isn't an ObjectBinauralizerMetadata metadata on the Audio Object.
    bool bPassthrough = false;
    // Search for it.
    for (AkAudioObject::ArrayCustomMetadata::Iterator it = in_objects.ppObjects[i]->arCustomMetadata.Begin(); it != in_objects.ppObjects[i]->arCustomMetadata.End(); ++it)
    {
        if ((*it).pluginID == AKMAKECLASSID(AkPluginTypeMetadata, MYCOMPANYID, OBJECT_BINAURALIZER_METADATA_ID))
        {
            // The metadata plug-in ID matches the type we are looking for. We can thus safely cast the AK::IAkPluginParam to a known type.
            ObjectBinauralizerMetadata* pMetadata = (ObjectBinauralizerMetadata*)(*it).pParam;
            bPassthrough = pMetadata->bPassthrough;
        }
    }
    // Do something with bPassthrough.
    // ...
}
Note: Since Audio Objects gather metadata plug-ins from each visited Wwise Object, it is possible that you find several instances of the same plug-in type on a single Audio Object. It is up to you to determine what the policy is when that happens, and inform your users.
