Wwise SDK 2024.1.1
All audio properties that support multiple RTPC curves, as well as most properties from Wwise plug-ins, can now be controlled by States. To add new State properties to an audio object or plug-in, open the State Properties view and select the desired properties. This selection is made per object or per plug-in, so the view isn't cluttered with every property your project could possibly use.
A new State icon has been added next to the link/unlink and RTPC icons to help you identify which properties can be modified by States.
It’s now possible to quickly change which State Group the Mixing Desk listens to for State changes, so faders on motorized controllers move as States change. It’s also possible to expand and collapse State Groups (and other main categories, such as Monitoring, Positioning, and Effects), which is convenient when Mixing Desk sessions contain multiple State Groups.
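The State changes that drive these properties are triggered from game code at runtime. A minimal sketch using the C++ sound engine API; the State Group and State names are placeholders for this example:

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>

// Switch the "Combat" State Group to its "HighIntensity" State.
// Any property bound to this State Group (volume, plug-in parameters, etc.)
// transitions to the values authored for that State.
// "Combat" and "HighIntensity" are hypothetical names.
void EnterHighIntensityCombat()
{
    AK::SoundEngine::SetState("Combat", "HighIntensity");
}
```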
Development in Spatial Audio continued during 2017.2 with a focus on expanding the existing features by enhancing usability, runtime efficiency, and flexibility. Here’s an overview of the main elements:
Built-in Low-pass and High-pass filters have been reworked in the sound engine's audio graph to better model filter values coming from different features such as Wwise user parameters, Attenuation, Occlusion and Obstruction. Previously, Wwise had a single filter on the voice output, and competing parameter values for this filter would have to be logically combined into a single value; only the minimum value was used. Wwise 2017.2 features individual filters on each unique output, including outputs from busses, so that values pertaining to different rays or output busses no longer have to be combined.
An example of how this is useful can be seen in multi-listener scenarios. A game object with multiple listeners will have a different Attenuation curve evaluation for each listener, since each listener may be at a different distance. The curve defines a Low-pass filter value; however, in previous versions of Wwise, only one Low-pass filter value could be used per voice. Now, when the single voice is mixed into each listener's output bus instance, the filter value determined by that listener's curve evaluation is applied to that output bus.
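Obstruction and Occlusion benefit from the same per-output filtering, since they are specified per emitter-listener pair. A hedged sketch of feeding different values for two listeners; the game object IDs are placeholders assumed to be registered elsewhere:

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>

// Hypothetical game object IDs registered elsewhere with RegisterGameObj().
static const AkGameObjectID EMITTER_ID    = 100;
static const AkGameObjectID LISTENER_A_ID = 200;
static const AkGameObjectID LISTENER_B_ID = 201;

// Obstruction/Occlusion are set per emitter-listener pair (0.0f = none, 1.0f = full).
// With the reworked filters, the LPF/HPF derived from each pair is applied on the
// corresponding listener's output bus instance instead of being collapsed into a
// single per-voice value.
void UpdateObstruction()
{
    // Listener A has a clear line of sight; Listener B is behind a wall.
    AK::SoundEngine::SetObjectObstructionAndOcclusion(EMITTER_ID, LISTENER_A_ID, 0.0f, 0.0f);
    AK::SoundEngine::SetObjectObstructionAndOcclusion(EMITTER_ID, LISTENER_B_ID, 0.8f, 0.2f);
}
```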
Busses have three new controls: Output Bus Volume, Output Bus LPF, and Output Bus HPF, allowing you to attenuate and filter the signal a bus sends to its output bus. In addition, low-pass and high-pass filtering of user-defined sends can now be driven by RTPCs.
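Since the new send filtering is RTPC-driven, the game controls it like any other Game Parameter. A minimal sketch, assuming a hypothetical Game Parameter named "ReverbSendLPF" mapped to a user-defined send's Low-pass filter in the authoring tool:

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>

// Drive the Game Parameter mapped (via RTPC) to a user-defined send's Low-pass filter.
// "ReverbSendLPF" and the game object ID are placeholders for this example.
void SetReverbSendFiltering(AkGameObjectID in_gameObjectID, float in_lpfAmount)
{
    AK::SoundEngine::SetRTPCValue("ReverbSendLPF", in_lpfAmount, in_gameObjectID);
}
```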
All ShareSets and stereo impulse responses packaged with the original Wwise Convolution Reverb now come with ambisonics equivalents. Projects that have already licensed the Convolution Reverb can access the ambisonics versions of the impulse responses from the "Import Factory Assets..." menu in Wwise (you may need to download them from the Audiokinetic Launcher first).
Audio output management and motion have been refactored to offer greater flexibility. This work lays the foundation for future improvements to output management and for the support of haptic devices.
The motion system used by Wwise Motion to support rumble on game controllers has been refactored. Instead of relying on a dedicated code path, it now uses the same feature set and API as audio. This simplified model makes it possible to support third-party haptic plug-ins for devices such as VR kits and mobile platforms.
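Because motion now flows through the regular output pipeline, a rumble endpoint is added like any other secondary output. A rough sketch, assuming the project's Motion Audio Device ShareSet is named "Wwise_Motion" and that a game object dedicated to the motion listener has already been registered:

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>

// Add a game controller's rumble motors as a regular Wwise output.
// "Wwise_Motion" is assumed to be the name of the Motion Audio Device ShareSet
// authored in the project, and in_listenerID a game object already registered
// to act as the listener routed to this output; both are placeholders.
AKRESULT AddRumbleOutput(AkGameObjectID in_listenerID, AkOutputDeviceID& out_deviceID)
{
    AkOutputSettings settings("Wwise_Motion"); // Audio Device ShareSet name
    return AK::SoundEngine::AddOutput(settings, &out_deviceID, &in_listenerID, 1);
}
```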
A series of feature requests from early adopters of WAAPI have been addressed in 2017.2.
The Spatial Audio suite is now fully supported by the Unity integration, and a step-by-step tutorial is available to help you discover its functionality.
The Ak Emitter Obstruction Occlusion component is applied to emitters and provides a basic ray-casting system to occlude or obstruct sounds. The presence or absence of an Ak Room component in the scene determines whether occlusion or obstruction is used.

While this system may be too basic for some games, it should be useful for the many projects that need a simple, straightforward way to manage occlusion and obstruction.
It’s now possible to start playback from anywhere in the Timeline, which gives you more control when editing in-game cinematics, for example. Audio scrubbing is also supported, which can be helpful when syncing audio to video.
Manually copying SoundBanks into the StreamingAssets folder is no longer required. A pre-build processing step now generates and copies SoundBanks to the appropriate location for the Unity build pipeline.
MIDI Events can now be posted to Wwise from C# scripts, and the Wwise Audio Input source plug-in is also accessible from C#.
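In the underlying sound engine, this corresponds to AK::SoundEngine::PostMIDIOnEvent, which the C# bindings wrap. A minimal C++ sketch posting a single note-on; the Event name and game object ID are placeholders:

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>
#include <AK/SoundEngine/Common/AkMidiTypes.h>

// Post a single note-on to a Wwise Event that targets a MIDI-capable source
// (for example, a Synth One instrument). "Play_MIDI_Synth" and the game
// object ID are hypothetical.
void PostMidiNote(AkGameObjectID in_gameObjectID)
{
    AkMIDIPost note;
    note.byType = AK_MIDI_EVENT_TYPE_NOTE_ON; // MIDI status: note on
    note.byChan = 0;                          // MIDI channel 0
    note.NoteOnOff.byNote = 60;               // Middle C
    note.NoteOnOff.byVelocity = 100;
    note.uOffset = 0;                         // No sample offset

    AkUniqueID eventID = AK::SoundEngine::GetIDFromString("Play_MIDI_Synth");
    AK::SoundEngine::PostMIDIOnEvent(eventID, in_gameObjectID, &note, 1);
}
```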
It is now possible to preview sounds from the Inspector view without entering Play Mode.
There are significant improvements in the Unreal Sequencer to support audio scrubbing, seeking inside tracks, and waveform display. These improvements should be particularly useful when editing in-game cinematics and linear or interactive VR experiences.
AkComponents can now support more than one listener. In addition, a listener that follows the focused viewport's camera position has been added to the Unreal Editor (when not in Play In Editor mode). It can be used, for example, to preview sounds and distance attenuation directly from the Animation Editor.
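Under the hood, associating an emitter with several listeners maps to the sound engine's AK::SoundEngine::SetListeners call, which the integration manages for AkComponents. A hedged sketch with placeholder game object IDs:

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>

// Associate a single emitter with two listeners, e.g. for split-screen.
// The IDs are placeholders; in the Unreal integration, AkComponents and
// listeners are registered and associated for you.
void AssignListeners(AkGameObjectID in_emitterID,
                     AkGameObjectID in_listenerA,
                     AkGameObjectID in_listenerB)
{
    const AkGameObjectID listeners[] = { in_listenerA, in_listenerB };
    AK::SoundEngine::SetListeners(in_emitterID, listeners, 2);
}
```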