Creating a New Project
First, we need to create a new plug-in project with the wp.py tools. See Creating a new Plug-in for more details about the arguments of the new command.
python "%WWISEROOT%/Scripts/Build/Plugins/wp.py" new --effect -a author_name -n Lowpass -t "First Order Lowpass" -d "Simple First Order Lowpass Filter"
cd Lowpass
We will target the Authoring platform on Windows, so let's call premake:
python "%WWISEROOT%/Scripts/Build/Plugins/wp.py" premake Authoring
Solutions have been created for building the SoundEngine and the Authoring (WwisePlugin) parts.
We can now build our plug-in and confirm that it loads in Wwise.
python "%WWISEROOT%/Scripts/Build/Plugins/wp.py" build -c Release -x x64 -t vc160 Authoring
Implementing the Filtering Process
Now, we want to add some processing to make our plug-in a little bit more useful. We will implement a simple first order lowpass filter, using this equation:
y[n] = x[n] + (y[n-1] - x[n]) * coeff
where coeff is a floating-point value between 0 and 1.
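To get a feel for the recurrence before wiring it into the plug-in, here is a minimal standalone sketch (a hypothetical helper using plain float instead of Wwise's AkReal32) that applies the equation to a buffer in place:

```cpp
#include <cstddef>

// One-pole lowpass recurrence: y[n] = x[n] + (y[n-1] - x[n]) * coeff.
// coeff = 0 passes the input through unchanged; values close to 1
// smooth the signal heavily. ioPreviousOutput carries y[n-1] across
// successive buffers of the same channel.
void OnePoleLowpass(float* ioBuffer, std::size_t numFrames, float coeff, float& ioPreviousOutput)
{
    for (std::size_t n = 0; n < numFrames; ++n)
    {
        ioPreviousOutput = ioBuffer[n] =
            ioBuffer[n] + (ioPreviousOutput - ioBuffer[n]) * coeff;
    }
}
```

Feeding a constant signal of 1.0 with coeff = 0.5 and a zero initial state produces 0.5, 0.75, 0.875, ...: the output converges toward the input, which is the expected step response of a first order lowpass.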
First, let's create a couple of variables in SoundEnginePlugin/LowpassFX.h to hold our filter's data.
#include <vector>

private:
    AkReal32 m_coeff;
    std::vector<AkReal32> m_previousOutput;
The variable m_coeff is our filter coefficient, a floating-point value shared by all sound channels. The vector m_previousOutput holds the last output value of each channel, which is required to compute the next values of the filter.
To implement the filtering effect, all we have to do is to initialize the coefficient variable, adjust the size of the vector according to the number of channels and then process every sample with the previous formula.
In SoundEnginePlugin/LowpassFX.cpp:
LowpassFX::LowpassFX()
    : m_coeff(0.99f)
{
}
AKRESULT LowpassFX::Init(
    AK::IAkPluginMemAlloc* in_pAllocator,
    AK::IAkEffectPluginContext* in_pContext,
    AK::IAkPluginParam* in_pParams,
    AkAudioFormat& in_rFormat)
{
    m_previousOutput.resize(in_rFormat.GetNumChannels(), 0.0f);
    return AK_Success;
}
void LowpassFX::Execute(AkAudioBuffer* io_pBuffer)
{
    const AkUInt32 uNumChannels = io_pBuffer->NumChannels();
    for (AkUInt32 i = 0; i < uNumChannels; ++i)
    {
        AkReal32* pBuf = io_pBuffer->GetChannel(i);
        AkUInt16 uFramesProcessed = 0;
        while (uFramesProcessed < io_pBuffer->uValidFrames)
        {
            m_previousOutput[i] = pBuf[uFramesProcessed] =
                pBuf[uFramesProcessed] + (m_previousOutput[i] - pBuf[uFramesProcessed]) * m_coeff;
            ++uFramesProcessed;
        }
    }
}
Using an RTPC Parameter to Control the Filter's Frequency
At this point, our filter is pretty boring because there is no way to interact with it. The next step is to bind an RTPC parameter to the filter's frequency so that we can change its value in real time. There are four changes to make to allow our plug-in to use an RTPC parameter.
First, we must add its definition in WwisePlugin/Lowpass.xml. There is already a parameter skeleton, named "PlaceHolder", in the plug-in template. We will use it to define a "Frequency" parameter. In WwisePlugin/Lowpass.xml, replace the placeholder property with this:
<Property Name="Frequency" Type="Real32" SupportRTPCType="Exclusive" DisplayName="Cutoff Frequency">
<UserInterface Step="0.1" Fine="0.001" Decimals="3" UIMax="10000" />
<DefaultValue>1000.0</DefaultValue>
<AudioEnginePropertyID>0</AudioEnginePropertyID>
<Restrictions>
<ValueRestriction>
<Range Type="Real32">
<Min>20.0</Min>
<Max>10000.0</Max>
</Range>
</ValueRestriction>
</Restrictions>
</Property>
Second, we need to update LowpassFXParams.h and LowpassFXParams.cpp in the SoundEnginePlugin folder to reflect our property changes.
In LowpassFXParams.h, update the parameter IDs and the name of the parameter in the LowpassRTPCParams structure.
static const AkPluginParamID PARAM_FREQUENCY_ID = 0;
static const AkUInt32 NUM_PARAMS = 1;

struct LowpassRTPCParams
{
    AkReal32 fFrequency;
};
Update LowpassFXParams.cpp as well:
AKRESULT LowpassFXParams::Init(AK::IAkPluginMemAlloc* in_pAllocator, const void* in_pParamsBlock, AkUInt32 in_ulBlockSize)
{
    if (in_ulBlockSize == 0)
    {
        // No bank data: initialize default parameters.
        RTPC.fFrequency = 1000.0f;
        m_paramChangeHandler.SetAllParamChanges();
        return AK_Success;
    }
    return SetParamsBlock(in_pParamsBlock, in_ulBlockSize);
}

AKRESULT LowpassFXParams::SetParamsBlock(const void* in_pParamsBlock, AkUInt32 in_ulBlockSize)
{
    AKRESULT eResult = AK_Success;
    AkUInt8* pParamsBlock = (AkUInt8*)in_pParamsBlock;
    RTPC.fFrequency = READBANKDATA(AkReal32, pParamsBlock, in_ulBlockSize);
    CHECKBANKDATASIZE(in_ulBlockSize, eResult);
    m_paramChangeHandler.SetAllParamChanges();
    return eResult;
}

AKRESULT LowpassFXParams::SetParam(AkPluginParamID in_paramID, const void* in_pValue, AkUInt32 in_ulParamSize)
{
    switch (in_paramID)
    {
    case PARAM_FREQUENCY_ID:
        RTPC.fFrequency = *((AkReal32*)in_pValue);
        m_paramChangeHandler.SetParamChange(PARAM_FREQUENCY_ID);
        break;
    default:
        return AK_InvalidParameter;
    }
    return AK_Success;
}
Third, in the WwisePlugin folder, we need to update the LowpassPlugin::GetBankParameters function to write the "Frequency" parameter in the bank:
bool LowpassPlugin::GetBankParameters(const GUID & in_guidPlatform, AK::Wwise::Plugin::DataWriter* in_pDataWriter) const
{
in_pDataWriter->WriteReal32(m_propertySet.GetReal32(in_guidPlatform, "Frequency"));
return true;
}
Finally, in our processing loop, we want to use the current frequency to compute the filter's coefficient with this formula:
coeff = exp(-2 * pi * f / sr)
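As a quick sanity check of the formula (LowpassCoeff is a hypothetical helper, not part of the plug-in code): at a 48000 Hz sampling rate, a 1000 Hz cutoff yields a coefficient of roughly 0.877, and lowering the cutoff pushes the coefficient toward 1, i.e. heavier smoothing.

```cpp
#include <cmath>

#ifndef M_PI
#define M_PI 3.14159265359
#endif

// coeff = exp(-2 * pi * f / sr): maps a cutoff frequency and a sampling
// rate to the one-pole filter coefficient in (0, 1).
double LowpassCoeff(double cutoffHz, double sampleRate)
{
    return std::exp(-2.0 * M_PI * cutoffHz / sampleRate);
}
```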
We need to retrieve the current sampling rate in Init (for example, by storing in_rFormat.uSampleRate in a member variable m_sampleRate), include some math symbols:
#include <cmath>
#ifndef M_PI
#define M_PI 3.14159265359
#endif
and compute the filter coefficient:
{
    m_coeff = static_cast<AkReal32>(exp(-2.0 * M_PI * m_pParams->RTPC.fFrequency / m_sampleRate));
}
Interpolating Parameter Values
It is often not enough to update a processing parameter once per audio buffer (a buffer channel usually holds between 64 and 2048 samples), especially if the parameter affects the frequency or the gain of the processing; updating the value too slowly can produce zipper noise or clicks in the output sound.
A simple solution to this problem is to linearly interpolate the value over the whole buffer. Here is how we can do this for our frequency parameter.
Just before computing a new frame of audio samples, i.e., at the top of the Execute function in LowpassFX.cpp, we will check if the frequency parameter has changed. To do so, we just ask the AkFXParameterChangeHandler object in the LowpassFXParams class. If the frequency has changed, we compute the variables of the ramp:
Note: The member variable uValidFrames of the AkAudioBuffer object represents the number of valid samples per channel contained in the buffer.
{
    AkReal32 coeffBegin = m_coeff, coeffEnd = 0.0f, coeffStep = 0.0f;
    if (m_pParams->m_paramChangeHandler.HasChanged(PARAM_FREQUENCY_ID))
    {
        coeffEnd = static_cast<AkReal32>(exp(-2.0 * M_PI * m_pParams->RTPC.fFrequency / m_sampleRate));
        coeffStep = (coeffEnd - coeffBegin) / io_pBuffer->uValidFrames;
    }
}
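The ramp itself is just repeated addition of the step. The sketch below (a hypothetical FillRamp helper, separate from the plug-in code) shows the values a parameter takes across one buffer:

```cpp
#include <cstddef>

// Linear parameter ramp over one buffer: the step is the total change
// divided by the frame count, so after numFrames increments the running
// value reaches 'end'. The last value written is one step short of 'end'.
void FillRamp(float* outValues, std::size_t numFrames, float begin, float end)
{
    const float step = (end - begin) / static_cast<float>(numFrames);
    float value = begin;
    for (std::size_t n = 0; n < numFrames; ++n)
    {
        outValues[n] = value;
        value += step;
    }
}
```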
With this data, all we have to do is to increase coeffBegin by coeffStep for each sample of the frame. We need to do this for each channel of the in/out buffer.
{
    const AkUInt32 uNumChannels = io_pBuffer->NumChannels();
    for (AkUInt32 i = 0; i < uNumChannels; ++i)
    {
        AkReal32* pBuf = io_pBuffer->GetChannel(i);
        coeffBegin = m_coeff;
        AkUInt16 uFramesProcessed = 0;
        while (uFramesProcessed < io_pBuffer->uValidFrames)
        {
            m_previousOutput[i] = pBuf[uFramesProcessed] =
                pBuf[uFramesProcessed] + (m_previousOutput[i] - pBuf[uFramesProcessed]) * coeffBegin;
            coeffBegin += coeffStep;
            ++uFramesProcessed;
        }
    }
    m_coeff = coeffBegin;
}
Now that we have a basic functional plug-in implementing a simple lowpass filter with real-time control over the cutoff frequency, let's talk about design concerns.
Encapsulating the Processing
At this point, all our signal processing logic is written inside the plug-in main class. This is not a good design pattern for many reasons:
- It's bloating our main plug-in class, and it will quickly become worse as we add new processing to build a complex effect.
- It will be hard to reuse our filter if we need it for another plug-in, and, especially with this kind of basic processing unit, it's going to happen!
- It does not respect the single responsibility principle.
Let's refactor our code to encapsulate the filter processing in its own class. Create a file FirstOrderLowpass.h in the SoundEnginePlugin folder with this definition:
#pragma once

#include <AK/SoundEngine/Common/AkTypes.h>

class FirstOrderLowpass
{
public:
    FirstOrderLowpass();
    ~FirstOrderLowpass();

    void Setup(AkUInt32 in_sampleRate);
    void SetFrequency(AkReal32 in_newFrequency);
    void Execute(AkReal32* io_pBuffer, AkUInt16 in_uValidFrames);

private:
    AkUInt32 m_sampleRate;
    AkReal32 m_frequency;
    AkReal32 m_coeff;
    AkReal32 m_previousOutput;
    bool m_frequencyChanged;
};
And add the implementation in a file called FirstOrderLowpass.cpp:
#include "FirstOrderLowpass.h"
#include <cmath>
#ifndef M_PI
#define M_PI 3.14159265359
#endif
FirstOrderLowpass::FirstOrderLowpass()
: m_sampleRate(0)
, m_frequency(0.0f)
, m_coeff(0.0f)
, m_previousOutput(0.0f)
, m_frequencyChanged(false)
{
}
FirstOrderLowpass::~FirstOrderLowpass() {}
void FirstOrderLowpass::Setup(
AkUInt32 in_sampleRate)
{
m_sampleRate = in_sampleRate;
}
void FirstOrderLowpass::SetFrequency(
AkReal32 in_newFrequency)
{
if (m_sampleRate > 0)
{
m_frequency = in_newFrequency;
m_frequencyChanged = true;
}
}
void FirstOrderLowpass::Execute(AkReal32* io_pBuffer, AkUInt16 in_uValidFrames)
{
    AkReal32 coeffBegin = m_coeff, coeffEnd = 0.0f, coeffStep = 0.0f;
    if (m_frequencyChanged)
    {
        coeffEnd = static_cast<AkReal32>(exp(-2.0 * M_PI * m_frequency / m_sampleRate));
        coeffStep = (coeffEnd - coeffBegin) / in_uValidFrames;
        m_frequencyChanged = false;
    }

    AkUInt16 uFramesProcessed = 0;
    while (uFramesProcessed < in_uValidFrames)
    {
        m_previousOutput = io_pBuffer[uFramesProcessed] =
            io_pBuffer[uFramesProcessed] + (m_previousOutput - io_pBuffer[uFramesProcessed]) * coeffBegin;
        coeffBegin += coeffStep;
        ++uFramesProcessed;
    }
    m_coeff = coeffBegin;
}
Then, all we have to do in our main plug-in class is to create a vector of FirstOrderLowpass objects (one per audio channel), call their Setup function, and start using them.
#include "FirstOrderLowpass.h"
#include <vector>

private:
    std::vector<FirstOrderLowpass> m_filter;
{
    m_filter.resize(in_rFormat.GetNumChannels());
    for (auto& filterChannel : m_filter) { filterChannel.Setup(in_rFormat.uSampleRate); }
}
{
    if (m_pParams->m_paramChangeHandler.HasChanged(PARAM_FREQUENCY_ID))
    {
        for (auto& filterChannel : m_filter) { filterChannel.SetFrequency(m_pParams->RTPC.fFrequency); }
    }

    const AkUInt32 uNumChannels = io_pBuffer->NumChannels();
    for (AkUInt32 i = 0; i < uNumChannels; ++i)
    {
        m_filter[i].Execute(io_pBuffer->GetChannel(i), io_pBuffer->uValidFrames);
    }
}