Wwise SDK 2024.1.1
Creating Sound Engine Object Processor Plug-ins
Warning: Object Processors are a superset of Effect plug-ins: they can do everything an Effect plug-in can do, plus things an Effect plug-in cannot. However, they are somewhat more involved to write. If you do not need to process multiple Audio Objects at once, and do not need to know which object you are processing but only its audio signal, write an Effect plug-in rather than an Object Processor (see Implementing an Effect Plug-in Interface).

Introduction

Object Processors resemble Effect plug-ins (see Implementing an Effect Plug-in Interface): both are inserted into one of the slots of the Effects tab of a Wwise object. When used with Audio Objects, however, they differ significantly. Essentially, Effect plug-ins are unaware of Audio Objects (AkAudioObject) and only process an audio signal, whereas Object Processors are aware of all the Audio Objects passing through a bus: they can process them together and access their metadata.

Whereas Effect plug-ins receive a single AkAudioBuffer, Object Processors receive Audio Objects, each consisting of an audio buffer (AkAudioBuffer) and an AkAudioObject instance carrying its metadata.

These concepts are explained in the following section.

Audio Objects: Concepts

An Audio Object carries an audio signal, which may be monophonic or multichannel. In Wwise, Audio Objects are easily observed by setting a bus configuration to Audio Objects; such busses are called Audio Object busses. Instead of mixing their inputs into a single buffer (multichannel or not), Audio Object busses gather Audio Objects and preserve their metadata. Audio Object busses support a varying number of Audio Objects, whereas other busses only support a single Audio Object at any given time.

The metadata of an Audio Object consists mostly of positioning information, along with an array of custom metadata, which are themselves plug-ins and may be used by Object Processor plug-ins. See the Audio Object Metadata section below for more details.
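As a rough orientation, the AkAudioObject members discussed on this page are sketched below. This is illustrative only; see AkAudioObject.h for the authoritative definition, and the sections below for what each field means.

// Illustrative only: the AkAudioObject members discussed on this page.
void InspectAudioObject(const AkAudioObject& in_object)
{
    AkAudioObjectID id = in_object.key;                     // Unique ID, local to a given bus.
    const AkPositioningData& pos = in_object.positioning;   // Positioning metadata (behavioral settings and 3D data).
    AkRamp gain = in_object.cumulativeGain;                 // Gain accumulated upstream (see Cumulative Gain Metadata below).
    const AkAudioObject::ArrayCustomMetadata& md = in_object.arCustomMetadata; // Custom metadata plug-ins gathered from visited objects.
    (void)id; (void)pos; (void)gain; (void)md;
}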

Implementing an Object Processor

When added to an Audio Object bus, an Effect plug-in is instantiated as many times as there are Audio Objects, and the lifetime of each instance matches that of the Audio Object it is assigned to. A single instance cannot simply be reused to process each Audio Object in turn, because an Effect may need to maintain state to ensure continuity with the next audio frame.

An Object Processor, on the other hand, is instantiated only once, regardless of the number of Audio Objects, and processes all of them together.

Object Processors implement either AK::IAkInPlaceObjectPlugin or AK::IAkOutOfPlaceObjectPlugin, depending on whether they process audio in place or not. A plug-in declares itself as an Object Processor by setting AkPluginInfo::bCanProcessObjects to true in AK::IAkPlugin::GetPluginInfo, and declares whether it works in place by setting AkPluginInfo::bIsInPlace accordingly.
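A minimal sketch of such a GetPluginInfo implementation for an in-place Object Processor follows; the class name MyObjectProcessorFX is hypothetical, and the uBuildVersion line is the usual plug-in boilerplate shown for completeness.

AKRESULT MyObjectProcessorFX::GetPluginInfo(AkPluginInfo& out_rPluginInfo)
{
    out_rPluginInfo.eType = AkPluginTypeEffect;                     // Object Processors are inserted as Effects.
    out_rPluginInfo.bIsInPlace = true;                              // Set to false for an out-of-place Object Processor.
    out_rPluginInfo.bCanProcessObjects = true;                      // Declare this plug-in as an Object Processor.
    out_rPluginInfo.uBuildVersion = AK_WWISESDK_VERSION_COMBINED;
    return AK_Success;
}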

Warning: Out-of-place Object Processors may only be added to busses. Like Effect plug-ins on busses, they may not change the rate at which data is consumed and produced (AkPluginInfo::bCanChangeRate may not be true). They may, however, change the output objects and their channel configuration.

Only the functions specific to these interfaces are covered here. Refer to Creating Sound Engine Plug-ins for the parts of the interface shared with other plug-in types (the AK::IAkPlugin interface), and to Implementing an Effect Plug-in Interface for the functionality shared with Effect plug-ins, such as initialization, bypass, and monitoring.

Implementing an In-Place Object Processor

In-place Object Processors receive the buffers and metadata of a collection of Audio Objects (AkAudioBuffer and AkAudioObject, respectively) in AK::IAkInPlaceObjectPlugin::Execute. They can read and modify the audio signal and metadata of all Audio Objects, but they cannot create or remove Audio Objects, or change their channel configuration.

Note that when an in-place Object Processor is inserted as an Effect on an object of the Actor-Mixer Hierarchy or Interactive Music Hierarchy, unlike when it is inserted on a bus, the Object Metadata is considered invalid during Execute and any modification made to the Metadata is not retained. An Object Processor can confirm this situation by checking whether the Object Key is equal to AK_INVALID_AUDIO_OBJECT_ID, and decide whether to handle it gracefully or flag an error, as sketched below.
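For instance, the check could be done at the top of Execute along these lines (a sketch only; MyInPlaceFX is a hypothetical class name):

void MyInPlaceFX::Execute(const AkAudioObjects& io_objects)
{
    for (AkUInt32 i = 0; i < io_objects.uNumObjects; ++i)
    {
        if (io_objects.ppObjects[i]->key == AK_INVALID_AUDIO_OBJECT_ID)
        {
            // Inserted on an Actor-Mixer or Interactive Music object: the Audio Object metadata
            // is not valid here and any change made to it will not be retained.
            // Either degrade gracefully (process the signal only) or flag an error.
        }
        // ... regular processing ...
    }
}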

The Compressor is an example of an in-place Object Processor. It needs to be an Object Processor because its algorithm depends on being aware of the signal of all Audio Objects at once. However, it does not modify the list of Audio Objects; it only modifies the audio signal conveyed by the objects.

void CAkCompressorFX::Execute(
    const AkAudioObjects& io_objects    // Input objects and object buffers.
    )
{
    // Downmix Audio Objects' signal into a working buffer "m_estBuffer",
    // perform gain estimation from that buffer and compute a gain to apply to each sample of each Audio Object.
    // ...

    // Apply compressor gain (samples of m_estBuffer) to each sample of each Audio Object.
    // For each object
    for (AkUInt32 i = 0; i < io_objects.uNumObjects; ++i)
    {
        // For each channel of this object.
        const AkUInt32 uNumChannels = io_objects.ppObjectBuffers[i]->NumChannels();
        for (AkUInt32 j = 0; j < uNumChannels; ++j)
        {
            AkReal32* pInBuf = io_objects.ppObjectBuffers[i]->GetChannel(j);
            const AkReal32* pInBufEnd = pInBuf + io_objects.ppObjectBuffers[i]->uValidFrames;
            AkReal32* estBuf = m_estBuffer;
            while (pInBuf < pInBufEnd)
            {
                *pInBuf *= *estBuf;
                ++estBuf;
                ++pInBuf;
            }
        }
    }
}

The Compressor can of course handle a single object; in other words, it can be applied to a traditional channel-based bus. We therefore used it to replace the previous Effect plug-in implementation.

Handling Tails in In-Place Object Processors

Like Effect plug-ins, in-place Object Processors may implement tails for the desired number of frames by changing the AkAudioBuffer::eState field from AK_NoMoreData to AK_DataReady. See Implementing Effect Plug-in Tails for more details. Note, however, that an Object Processor needs to handle the tail of each Audio Object individually, and thus needs to keep track of each object, as sketched below.

Note: Do not store the addresses of the Audio Objects passed to Execute, because they may differ from one frame to the next. Use the AkAudioObject::key field to identify Audio Objects instead.
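A minimal sketch of per-object tail tracking keyed on AkAudioObject::key follows. The TailState struct and the m_tails member are hypothetical bookkeeping; any associative container keyed on the object key works.

#include <map>

// Hypothetical bookkeeping: remaining tail frames per Audio Object, keyed on AkAudioObject::key.
struct TailState { AkUInt32 uTailFramesRemaining = 0; };
std::map<AkAudioObjectID, TailState> m_tails;   // member of the (hypothetical) plug-in class

void MyInPlaceFX::Execute(const AkAudioObjects& io_objects)
{
    for (AkUInt32 i = 0; i < io_objects.uNumObjects; ++i)
    {
        AkAudioBuffer* pBuf = io_objects.ppObjectBuffers[i];
        TailState& tail = m_tails[io_objects.ppObjects[i]->key];

        // ... regular processing; refresh tail.uTailFramesRemaining while the effect is ringing ...

        if (pBuf->eState == AK_NoMoreData && tail.uTailFramesRemaining > 0)
        {
            // This object's source has ended but the effect is still ringing:
            // produce the tail samples and keep the object alive by reporting AK_DataReady.
            AkUInt32 uFrames = pBuf->MaxFrames();
            if (uFrames > tail.uTailFramesRemaining)
                uFrames = tail.uTailFramesRemaining;
            tail.uTailFramesRemaining -= uFrames;
            pBuf->uValidFrames = (AkUInt16)uFrames;
            pBuf->eState = AK_DataReady;
        }
    }
}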

Implementing an Out-of-Place Object Processor

Out-of-place Object Processors manage two distinct sets of Audio Objects, one at the input and one at the output. The input Audio Objects depend on the host bus, while the output Audio Objects are created explicitly by the plug-in, using one of the two methods described in the following sections. Each frame, all of these objects are passed to the plug-in via AK::IAkOutOfPlaceObjectPlugin::Execute.

Out-of-Place Object Processors and Bus Configurations

The channel configuration of the output Audio Objects is determined by the plug-in.

Essentially, busses that are not Audio Object busses (call them single-object busses) are a special case of Audio Object busses that only support one Audio Object. Because Object Processors may consume and produce any number of Audio Objects interchangeably, they make it possible for a single-object bus to output a varying number of Audio Objects and, conversely, for an Audio Object bus to output a single Audio Object, like a traditional mixing bus.

Remark: When inserted on a bus that is not an Audio Object bus, an Object Processor receives the bus's actual channel configuration in AK::IAkEffectPlugin::Init. Because Object Processors are a superset of Effect plug-ins, care should be taken so that they work seamlessly whether they are inserted on an Audio Object bus or not, unless inserting them on a non-Audio Object bus constitutes a user error. For example, users should not insert a Software Binauralizer plug-in on a bus that is not an Audio Object bus, because the downmixed audio would not carry any useful positioning information. In such a case, it is best to let users know that they probably made a mistake.

Handling Tails and Object Lifetime in Out-of-Place Object Processors

At the beginning of each frame, the sound engine resets the AkAudioBuffer::eState of all input and output Audio Objects to AK_NoMoreData. If AkAudioBuffer::eState is still AK_NoMoreData after processing, the corresponding Audio Object is destroyed. The Object Processor itself is only destroyed once all of its input and output objects have been destroyed. Consequently, for an out-of-place Object Processor to keep outputting audio after all input Audio Objects are gone, it must keep one or more output Audio Objects alive by setting their state to AK_DataReady, as sketched below.

Note: Be careful when keeping track of objects inside an Object Processor. Never reference objects that have been destroyed by the sound engine.
Remark: Input Audio Objects are kept alive if you set their AkAudioBuffer::eState to AK_DataReady. Avoid doing so, however, as it generates unnecessary tails.
Remark: An out-of-place Object Processor with no active output Audio Object outputs silence.
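For instance, a reverb-like out-of-place Object Processor that should keep ringing after its inputs are gone could keep its output object alive along these lines (a sketch; m_uTailFramesRemaining is a hypothetical member tracking the remaining tail length):

// Inside AK::IAkOutOfPlaceObjectPlugin::Execute, after writing this frame's output samples:
if (out_objects.uNumObjects > 0)
{
    AkAudioBuffer* pOut = out_objects.ppObjectBuffers[0];
    if (in_objects.uNumObjects == 0 && m_uTailFramesRemaining > 0)
    {
        // All inputs are gone, but the tail is still audible: report AK_DataReady so the host
        // neither destroys this output object nor, eventually, the plug-in itself.
        AkUInt32 uFrames = pOut->MaxFrames();
        if (uFrames > m_uTailFramesRemaining)
            uFrames = m_uTailFramesRemaining;
        m_uTailFramesRemaining -= uFrames;
        pOut->uValidFrames = (AkUInt16)uFrames;
        pOut->eState = AK_DataReady;
    }
    // Otherwise, leave eState as set by regular processing: AK_NoMoreData lets the host
    // garbage collect the output object once the inputs have ended.
}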

Out-of-Place Object Processing Examples

Let's examine out-of-place object processing using the following three examples.

Software Binauralizer

A software binauralizer may be implemented as an out-of-place Object Processor that takes multiple Audio Objects as input and outputs a single Audio Object with a stereo channel configuration. Such a plug-in should be added to an Audio Object bus, but it makes that bus output a single stereo signal.

For convenience, the unique stereo output object can be created via the handshaking that takes place in AK::IAkEffectPlugin::Init.

AKRESULT ObjectBinauralizerFX::Init(
    AK::IAkPluginMemAlloc* in_pAllocator,
    AK::IAkEffectPluginContext* in_pContext,
    AK::IAkPluginParam* in_pParams,
    AkAudioFormat& io_rFormat)
{
    m_pContext = in_pContext;

    // io_rFormat.channelConfig.eConfigType will be different than AK_ChannelConfigType_Objects if the configuration of the input of the plugin is known and does not support a
    // dynamic number of objects. However this plug-in is pointless if it is not instantiated on an Audio Object bus, so we are better off letting our users know.
    if (io_rFormat.channelConfig.eConfigType != AK_ChannelConfigType_Objects)
        return AK_UnsupportedChannelConfig;

    // Inform the host that the output will be stereo. The host will create an output object for us and pass it to Execute.
    io_rFormat.channelConfig.SetStandard(AK_SPEAKER_SETUP_STEREO);

    return AK_Success;
}

Then, in Execute:

void ObjectBinauralizerFX::Execute(
    const AkAudioObjects& in_objects,   ///< Input objects and object audio buffers.
    const AkAudioObjects& out_objects   ///< Output objects and object audio buffers.
    )
{
    AKASSERT(in_objects.uNumObjects > 0);   // Should never be called with 0 objects if this plug-in does not force tails.
    AKASSERT(out_objects.uNumObjects == 1); // Output config is a stereo channel stream.

    // "Binauralize" (just mix) objects in stereo output buffer.
    // For the purpose of this demonstration, instead of applying HRTF filters, let's call the built-in service to compute panning gains.

    // The output object should be stereo. Clear its two channels.
    memset(out_objects.ppObjectBuffers[0]->GetChannel(0), 0, out_objects.ppObjectBuffers[0]->MaxFrames() * sizeof(AkReal32));
    memset(out_objects.ppObjectBuffers[0]->GetChannel(1), 0, out_objects.ppObjectBuffers[0]->MaxFrames() * sizeof(AkReal32));

    AKRESULT eState = AK_NoMoreData;
    for (AkUInt32 i = 0; i < in_objects.uNumObjects; ++i)
    {
        // State management: set the output to AK_DataReady as long as one of the inputs is AK_DataReady. Otherwise set to AK_NoMoreData.
        if (in_objects.ppObjectBuffers[i]->eState != AK_NoMoreData)
            eState = in_objects.ppObjectBuffers[i]->eState;

        // Prepare mixing matrix for this input.
        const AkUInt32 uNumChannelsIn = in_objects.ppObjectBuffers[i]->NumChannels();
        AkUInt32 uTransmixSize = AK::SpeakerVolumes::Matrix::GetRequiredSize(
            uNumChannelsIn,
            2);
        AK::SpeakerVolumes::MatrixPtr mx = (AK::SpeakerVolumes::MatrixPtr)AkAllocaSIMD(uTransmixSize);
        AK::SpeakerVolumes::Matrix::Zero(mx, uNumChannelsIn, 2);

        // Compute panning gains and fill the mixing matrix.
        m_pContext->GetMixerCtx()->ComputePositioning(
            in_objects.ppObjects[i]->positioning,
            in_objects.ppObjectBuffers[i]->GetChannelConfig(),
            out_objects.ppObjectBuffers[0]->GetChannelConfig(),
            mx
        );

        // Using the mixing matrix, mix the channels of the ith input object into the one and only stereo output object.
        AK_GET_PLUGIN_SERVICE_MIXER(m_pContext->GlobalContext())->MixNinNChannels(
            in_objects.ppObjectBuffers[i],
            out_objects.ppObjectBuffers[0],
            1.f,
            1.f,
            mx, /// NOTE: To properly interpolate from frame to frame and avoid any glitch, we would need to store the previous matrix (OR positional information) for each object.
            mx);
    }

    // Set the output object's state.
    out_objects.ppObjectBuffers[0]->uValidFrames = in_objects.ppObjectBuffers[0]->MaxFrames();
    out_objects.ppObjectBuffers[0]->eState = eState;
}

3D Panner

A 3D panner could be implemented in a way similar to the Software Binauralizer above. However, it is preferable to have it instantiate a collection of output Audio Objects corresponding to spatialized virtual microphones. This way, downstream busses or devices can pan the signal of these Audio Objects themselves, taking advantage of the positioning metadata of these virtual microphones.

In AK::IAkEffectPlugin::Init, create the output objects explicitly with AK::IAkEffectPluginContext::CreateOutputObjects instead of returning a non-object configuration.

static const int k_uNumObjectsOut = 6;

AKRESULT Ak3DPannerFX::Init(
    AK::IAkPluginMemAlloc* in_pAllocator,
    AK::IAkEffectPluginContext* in_pContext,
    AK::IAkPluginParam* in_pParams,
    AkAudioFormat& in_rFormat
    )
{
    // Create output objects.

    // Desired channel configuration for all new objects: mono.
    AkChannelConfig channelConfig;
    channelConfig.SetStandard(AK_SPEAKER_SETUP_MONO);

    // AkAudioObjects::uNumObjects, the number of objects to create.
    // AkAudioObjects::ppObjectBuffers, Returned array of pointers to the object buffers newly created, allocated by the caller. Pass nullptr if they're not needed.
    // AkAudioObjects::ppObjects, Returned array of pointers to the objects newly created, allocated by the caller. Pass nullptr if they're not needed.
    AkAudioObject* ppObjects[k_uNumObjectsOut];
    AkAudioObjects outputObjects;
    outputObjects.uNumObjects = k_uNumObjectsOut;
    outputObjects.ppObjects = ppObjects;
    outputObjects.ppObjectBuffers = nullptr;    // not needed.

    AKRESULT res = in_pContext->CreateOutputObjects(
        channelConfig,
        outputObjects
    );
    if (res == AK_Success)
    {
        // Set output objects' 3D positions as if they were laid out as a 5.1 config around the listener.
        // FL
        ppObjects[0]->positioning.threeD.xform.SetPosition(-0.707f, 0.f, 0.707f);
        // Store the objects' keys so we can later retrieve them (see helper below).
        m_objectKeys[0] = ppObjects[0]->key;
        // FR
        //...
    }
    return res;
}
// Helper function: find an object having key in_key in array in_objects.
static AkUInt32 FindOutputObjectIdx(AkAudioObjectID in_key, AkAudioObject** in_objects)
{
    for (int i = 0; i < k_uNumObjectsOut; i++)
    {
        if (in_objects[i]->key == in_key)
            return i;
    }
    AKASSERT(false);
    return -1;
}

void Ak3DPannerFX::Execute(
    const AkAudioObjects& in_objects,   // Input objects and object audio buffers.
    const AkAudioObjects& out_objects   // Output objects and object audio buffers.
    )
{
    // Compute panning of each object into a temp buffer tempBuffer.
    // ...

    // Copy each channel of the temp buffer to its corresponding output object.
    AKASSERT(k_uNumObjectsOut == out_objects.uNumObjects);
    for (int i = 0; i < k_uNumObjectsOut; i++)
    {
        // Find corresponding output object.
        // In Execute, the order of objects is not reliable. We need to search for each object in the array of output objects, using the helper defined above.
        AkUInt32 idx = FindOutputObjectIdx(m_objectKeys[i], out_objects.ppObjects);

        // Copy the ith temp buffer's channel into the proper output object.
        memcpy(out_objects.ppObjectBuffers[idx]->GetChannel(0), tempBuffer.GetChannel(i), tempBuffer.uValidFrames * sizeof(AkReal32));

        // Set the output object's state to avoid garbage collection by the host.
        out_objects.ppObjectBuffers[idx]->uValidFrames = tempBuffer.uValidFrames;
        out_objects.ppObjectBuffers[idx]->eState = tempBuffer.eState;
    }
}

In the example above, you may wonder why (-0.707f, 0.f, 0.707f) corresponds to the front-left position. See About 3D Transformations below for details.

Particle Generator

For each input Audio Object, a particle generator could create N output Audio Objects, positioned randomly around the position of the corresponding object. Such an Object Processor cannot create its objects in Init; it has to create them dynamically in Execute, while keeping track of them and of the input objects they correspond to. When the state of an input object is AK_NoMoreData, the state of the corresponding output objects should also be left at AK_NoMoreData, which ensures that the sound engine garbage collects these objects.

Note: If an out-of-place Object Processor calls AK::IAkEffectPluginContext::CreateOutputObjects from within Execute, it cannot reliably access the output objects passed in out_objects. In that case, it must use AK::IAkEffectPluginContext::GetOutputObjects.
// The plugin needs to maintain a map of input object keys to generated objects. Like so:
struct GeneratedObjects
{
    AkUInt32 numObjs;
    AkAudioObjectID apObjectKeys[AK_MAX_GENERATED_OBJECTS];
    int index;  /// We use an index to mark input objects as "visited" and map them to their output objects (index in the in_objects array) at the same time.
};
AkMixerInputMap<AkAudioObjectID, GeneratedObjects> m_mapInObjsToOutObjs;    // member of ParticleGeneratorFX

void ParticleGeneratorFX::Execute(
    const AkAudioObjects& in_objects,   ///< Input objects and object audio buffers.
    const AkAudioObjects& out_objects   ///< Output objects and object audio buffers.
    )
{
    AKASSERT(in_objects.uNumObjects > 0);   // Should never be called with 0 objects if this plug-in supports no tail.

    // Object bookkeeping.
    for (AkUInt32 i = 0; i < in_objects.uNumObjects; ++i)
    {
        // Find this object in our map.
        AkAudioObject * inobj = in_objects.ppObjects[i];
        GeneratedObjects * pEntry = m_mapInObjsToOutObjs.Exists(inobj->key);
        if (pEntry)
            pEntry->index = i;  // Found. Note down the index for later.
        else
        {
            // New. Create a new entry and new associated output objects.
            pEntry = m_mapInObjsToOutObjs.AddInput(inobj->key);
            if (pEntry)
            {
                AkUInt32 numObjsOut = 1;
                {
                    // If "3D".
                    // Create between one and AK_MAX_GENERATED_OBJECTS output objects.
                    AkReal32 fRandom = rand() / ((AkReal32)RAND_MAX);
                    numObjsOut = (AkUInt32)(fRandom * (AK_MAX_GENERATED_OBJECTS - 1) + 1.f);
                }
                // Else just create one object.

                AkAudioObject ** arNewObjects = (AkAudioObject**)AkAlloca(numObjsOut * sizeof(AkAudioObject*));
                AkAudioObjects outputObjects;
                outputObjects.uNumObjects = numObjsOut;
                outputObjects.ppObjectBuffers = nullptr;
                outputObjects.ppObjects = arNewObjects;
                if (m_pContext->CreateOutputObjects(in_objects.ppObjectBuffers[i]->GetChannelConfig(), outputObjects) == AK_Success)
                {
                    for (AkUInt32 iObj = 0; iObj < numObjsOut; iObj++)
                    {
                        AkAudioObject * pObject = arNewObjects[iObj];
                        pEntry->apObjectKeys[iObj] = pObject->key;

                        // Copy the input object's positional metadata, but randomize the actual position.
                        pObject->positioning.threeD = inobj->positioning.threeD;

                        // Randomize position and assign to output object.
                        /// NOTE: By randomizing position now at object creation time and not updating it with inobj->positioning, particles will remain fixed with the listener's head throughout their
                        /// existence. We could choose to instead store an offset in our map and apply it to inobj at each frame.
                        AkVector pos = ComputeRandomPosition(inobj->positioning.threeD.xform.Position());
                        pObject->positioning.threeD.xform.SetPosition(pos);
                    }
                    pEntry->numObjs = numObjsOut;
                    pEntry->index = i;
                }
            }
        }
    }

    // Copy input objects' signal to corresponding output objects. Garbage collect objects (on our side) along the way.
    // We cannot use out_objects because we changed the collection of objects during this call! Use GetOutputObjects instead.
    // First, query the number of objects.
    AkAudioObjects outputObjects;
    outputObjects.uNumObjects = 0;
    outputObjects.ppObjectBuffers = nullptr;
    outputObjects.ppObjects = nullptr;
    m_pContext->GetOutputObjects(outputObjects);

    if (outputObjects.uNumObjects > 0)
    {
        // Allocate arrays on the stack and retrieve the output objects.
        AkAudioBuffer ** buffersOut = (AkAudioBuffer **)AkAlloca(outputObjects.uNumObjects * sizeof(AkAudioBuffer*));
        AkAudioObject ** objectsOut = (AkAudioObject **)AkAlloca(outputObjects.uNumObjects * sizeof(AkAudioObject*));
        outputObjects.ppObjectBuffers = buffersOut;
        outputObjects.ppObjects = objectsOut;
        m_pContext->GetOutputObjects(outputObjects);

        // Iterate through our internal map.
        AkMixerInputMap<AkAudioObjectID, GeneratedObjects>::Iterator it = m_mapInObjsToOutObjs.Begin();
        while (it != m_mapInObjsToOutObjs.End())
        {
            // Has the input object been passed to Execute?
            if ((*it).pUserData->index >= 0)
            {
                // Yes. Copy its signal into each of its associated output objects.
                AkAudioBuffer* inbuf = in_objects.ppObjectBuffers[(*it).pUserData->index];
                const AkUInt32 uNumChannels = inbuf->NumChannels();
                for (AkUInt32 out = 0; out < (*it).pUserData->numObjs; out++)
                {
                    // Find output object.
                    AkAudioBuffer * pBufferOut = nullptr;
                    AkAudioObject * pObjOut = nullptr;
                    for (AkUInt32 i = 0; i < outputObjects.uNumObjects; i++)
                    {
                        if (objectsOut[i]->key == (*it).pUserData->apObjectKeys[out])
                        {
                            pBufferOut = buffersOut[i];
                            pObjOut = objectsOut[i];
                            break;
                        }
                    }
                    if (pObjOut)
                    {
                        // Copy each channel.
                        for (AkUInt32 j = 0; j < uNumChannels; ++j)
                        {
                            AkReal32* pInBuf = inbuf->GetChannel(j);
                            AkReal32* outBuf = pBufferOut->GetChannel(j);
                            memcpy(outBuf, pInBuf, inbuf->uValidFrames * sizeof(AkReal32));
                        }
                        // Copy state.
                        pBufferOut->uValidFrames = inbuf->uValidFrames;
                        pBufferOut->eState = inbuf->eState;
                        // Also, since there is a clear association of input objects to output objects, let's propagate the associated input object's custom metadata to the output.
                        pObjOut->arCustomMetadata.Copy(in_objects.ppObjects[(*it).pUserData->index]->arCustomMetadata);
                    }
                }
                (*it).pUserData->index = -1;    // "clear" index for next frame.
                ++it;
            }
            else
            {
                // Destroy stale objects.
                // Output objects are collected by the host if we don't set their eState explicitly to AK_DataReady.
                // However, here we need to get rid of them on our side otherwise our map would grow indefinitely.
                it = m_mapInObjsToOutObjs.EraseSwap(it);
            }
        }
    }
}

Assigning Names to Audio Objects

In the authoring tool's Audio Object Profiler, Audio Objects take the name of the Wwise object that instigated them. Accordingly, the output objects of an out-of-place Object Processor all take the name of the bus they are on. To facilitate profiling, it is recommended that you assign names to output objects using AkAudioObject::SetName where applicable.

For example, the objects created in the 3D Panner above could be named as follows:

// FL
ppObjects[0]->SetName(in_pAllocator, "FL");
//...

Audio Object Metadata

The AkAudioObject structure contains all the Audio Object metadata that travels with the Audio Object's audio buffer throughout the object pipeline. It falls into the following categories:

  • Identification: AkAudioObject::key, AkAudioObject::instigatorID, and AkAudioObject::objectName. These must never be written to by Object Processors; only objectName may be set by an Object Processor, using AkAudioObject::SetName.
  • Positioning: AkAudioObject::positioning (see Positioning Metadata below).
  • Cumulative gain: AkAudioObject::cumulativeGain (see Cumulative Gain Metadata below).
  • Custom metadata plug-ins: AkAudioObject::arCustomMetadata (see Custom Metadata Plug-ins below).

Positioning Metadata

Audio Objects carry the positioning data of the instigating sound in AkAudioObject::positioning. AkAudioObject::positioning.behavioral holds all the relevant positioning settings that can be set on Wwise objects. For example, if a sound uses speaker panning, AkAudioObject::positioning.behavioral.panType is set to one of the panner types, and panLR, panBF, and panDU correspond to the position of the panner.

If the spatialization mode is 3D (AK_SpatializationMode_PositionOnly or AK_SpatializationMode_PositionAndOrientation), AkAudioObject::positioning.threeD contains all the data pertaining to the 3D position (see the sketch after this list):

  • AkAudioObject::positioning.threeD.xform typically follows the position of the associated game object, but it may be modified or overridden per sound according to its 3D positioning settings (such as 3D Automation).
  • AkAudioObject::positioning.threeD.spread and AkAudioObject::positioning.threeD.focus are typically computed from the Attenuation curves.
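The sketch below shows how an Object Processor might branch on this positioning metadata inside Execute; in_objects is assumed to be the input collection passed to Execute, and only fields named above are used.

for (AkUInt32 i = 0; i < in_objects.uNumObjects; ++i)
{
    const AkPositioningData& pos = in_objects.ppObjects[i]->positioning;
    if (pos.behavioral.spatMode == AK_SpatializationMode_None)
    {
        // Speaker panning only: pos.behavioral.panType and the panner position (panLR, panBF, panDU) apply.
    }
    else
    {
        // 3D spatialization: position/orientation, spread and focus are available.
        AkVector position = pos.threeD.xform.Position();
        AkReal32 spread = pos.threeD.spread;
        AkReal32 focus = pos.threeD.focus;
        (void)position; (void)spread; (void)focus;
    }
}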

Cumulative Gain Metadata

Audio Objects carry the cumulative gain applied upstream, such as the Volume setting of the source or gain changes coming from busses and bus paths. In simple scenarios without Effects or Object Processors, this means the gain is not applied to the Audio Object's audio signal until the final downmix to a speaker bed, or until the object is sent to the system output as an Audio Object. This avoids adjusting the gain multiple times within a frame, and applying it progressively to the Audio Object's signal ensures smooth changes in the mix, especially when Audio Objects are created and destroyed as positions are added to or removed from Game Objects.

Support for this metadata is optional for Object Processors; it is enabled by setting AkPluginInfo::bUsesGainAttribute to true in the Object Processor's implementation of AK::IAkPlugin::GetPluginInfo. If bUsesGainAttribute is left false, every audio buffer passed to Execute has the cumulative gain already applied to the Audio Object's signal before execution, and the gain passed to the Object Processor is neutral. If bUsesGainAttribute is set to true, the audio buffers are left untouched and the cumulative gain may have a non-unity value, so the Object Processor can take the gain into account as needed and modify the Audio Object's cumulative gain.

When modifying an Audio Object's cumulative gain, note that the value is an AkRamp, and that the sound engine does not automatically maintain continuity between the fNext value of one frame and the fPrev value of the next. In other words, if an Object Processor modifies fNext in a given frame, it must apply the same modification to fPrev in the following frame. If this is not managed correctly, glitches or discontinuities in the audio signal may occur when other parts of the audio pipeline consume the Audio Object's gain.
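As an illustration, an Object Processor that opts in with bUsesGainAttribute could choose to consume the cumulative gain itself, ramping it across the frame and then resetting the object's gain to unity. This is only a sketch of one possible policy, not the required behavior; io_objects is assumed to be the collection passed to an in-place Execute.

for (AkUInt32 i = 0; i < io_objects.uNumObjects; ++i)
{
    AkAudioObject* pObj = io_objects.ppObjects[i];
    AkAudioBuffer* pBuf = io_objects.ppObjectBuffers[i];
    const AkReal32 fPrev = pObj->cumulativeGain.fPrev;
    const AkReal32 fNext = pObj->cumulativeGain.fNext;
    const AkReal32 fStep = (pBuf->uValidFrames > 0) ? (fNext - fPrev) / (AkReal32)pBuf->uValidFrames : 0.f;
    for (AkUInt32 j = 0; j < pBuf->NumChannels(); ++j)
    {
        AkReal32* pChan = pBuf->GetChannel(j);
        AkReal32 fGain = fPrev;
        for (AkUInt16 k = 0; k < pBuf->uValidFrames; ++k)
        {
            pChan[k] *= fGain;      // Apply the ramped cumulative gain to the signal ourselves.
            fGain += fStep;
        }
    }
    // The gain is now baked into the signal; reset it to unity so it is not applied again downstream.
    // Setting both fPrev and fNext to 1.f keeps the ramp continuous from frame to frame.
    pObj->cumulativeGain.fPrev = 1.f;
    pObj->cumulativeGain.fNext = 1.f;
}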

About 3D Transformations

If the spatialization mode AkAudioObject::positioning.behavioral.spatMode is 3D (AK_SpatializationMode_PositionOnly or AK_SpatializationMode_PositionAndOrientation), the 3D position is transformed (translated and rotated) to be relative to the game object (the listener) associated with the bus. For example, a sound may be positioned at (2, 0, 0); when processed by an Audio Object bus associated with a listener located at (10, 0, 0), the resulting Audio Object is positioned at (-8, 0, 0). Object Processors on that bus therefore see the Audio Object at position (-8, 0, 0).

Wwise uses a left-handed coordinate system, and the default orientation's front vector points toward Z while the top vector points toward Y (as defined in AkCommonDefs.h).

/// Default listener transform.
#define AK_DEFAULT_LISTENER_POSITION_X (0.0f) // at coordinate system's origin
#define AK_DEFAULT_LISTENER_POSITION_Y (0.0f)
#define AK_DEFAULT_LISTENER_POSITION_Z (0.0f)
#define AK_DEFAULT_LISTENER_FRONT_X (0.0f) // looking toward Z,
#define AK_DEFAULT_LISTENER_FRONT_Y (0.0f)
#define AK_DEFAULT_LISTENER_FRONT_Z (1.0f)
#define AK_DEFAULT_TOP_X (0.0f) // top towards Y
#define AK_DEFAULT_TOP_Y (1.0f)
#define AK_DEFAULT_TOP_Z (0.0f)

For example, in the case of the Software Binauralizer, input objects have already been rotated before reaching the plug-in, so that positive Z points in front of the listener, positive X points to its right, and so on. This is why the AK::IAkMixerPluginContext::ComputePositioning service used in the example does not take the listener's orientation: it assumes the default orientation.

Thus, in the 3D Panner example above, (-0.707, 0, 0.707) points 45 degrees to the front left of the game object associated with the bus.

Custom Metadata Plug-ins

Object Processors have access to custom metadata bound to Audio Objects.

Custom metadata is a plug-in that consists solely of a set of parameters. In the authoring tool, metadata plug-ins can be added to any Wwise object, and they support ShareSets. In the sound engine, they implement the AK::IAkPluginParam interface.

Where applicable, on each frame, Audio Objects gather all the metadata plug-ins bound to the instigating sound and add them to the AkAudioObject::arCustomMetadata array, followed by all the metadata plug-ins bound to each bus they pass through. Object Processors, whether in-place or out-of-place, may read this array. Of course, they must know the plug-ins in order to interpret their content.

Implementers of Object Processors may write one or more companion metadata plug-ins that users can add to Wwise objects.

For example, in the Software Binauralizer discussed earlier (see the Software Binauralizer section), we might want to support a passthrough mode so that HRTF filtering can be skipped for certain sounds. To do so, we can create a companion metadata plug-in, ObjectBinauralizerMetadata, with a single Boolean property called Passthrough. Users can then add this plug-in to any Wwise object for which they want to disable HRTF. Then, in the Object Processor's Execute(), insert the following code:

for (AkUInt32 i = 0; i < in_objects.uNumObjects; ++i)
{
    // Assume that Passthrough Mode is false if there isn't an ObjectBinauralizerMetadata metadata on the Audio Object.
    bool bPassthrough = false;

    // Search for it.
    for (AkAudioObject::ArrayCustomMetadata::Iterator it = in_objects.ppObjects[i]->arCustomMetadata.Begin(); it != in_objects.ppObjects[i]->arCustomMetadata.End(); ++it)
    {
        if ((*it).pluginID == AKMAKECLASSID(AkPluginTypeMetadata, MYCOMPANYID, OBJECT_BINAURALIZER_METADATA_ID))
        {
            // The metadata plugin ID matches the type we are looking for. We can thus safely cast the AK::IAkPluginParam to a known type.
            ObjectBinauralizerMetadata * pMetadata = (ObjectBinauralizerMetadata*)(*it).pParam;
            bPassthrough = pMetadata->bPassthrough;
        }
    }

    // Do something with bPassthrough
    // ...
}
Remark: Because Audio Objects gather metadata plug-ins from every Wwise object they visit, there may be more than one instance of the same plug-in type on a single Audio Object. In that case, it is up to you to decide on a policy and inform your users accordingly.
