The first thing you'll notice in Wwise 2022.1 is the tab-based user interface, with changes made to improve the editing workflow and navigation. These improvements represent continued steps toward providing a comprehensive interactive audio solution with creative depth and accessibility. This release brings a new feel to the way you interact with your Wwise project and expands the language of interactive audio implementation and dynamic audio authoring.
An Object Tab Group is now included as the center-focus of the Designer layout by default. Any object in the Project Explorer can be selected and displayed in an Object Tab, presenting its properties and any associated editors in one central location. This means you don't have to switch layouts to access the Music Editor, for example, when working with objects in the Interactive Music Hierarchy; everything you need to modify the properties of an object is available to you within the Object Tab.
Object Tabs provide:
There are two types of Object Tab: Recycle and Keep Open. A Recycle tab replaces the previously selected object in the same Object Tab each time a new object is selected in the Project Explorer. Alternatively, tabs can be kept open independent of the Recycle tab, and can be rearranged, docked, and undocked from a tab group, bringing greater flexibility to extend workflows and adapt to a project's needs. This feature lets you organize groups of tabs that correspond to different operations, compare objects across tab groups, or support authoring processes that might be unique to your development.
An Object Tab Group.
Three sample Object Tabs.
Each Object Tab consists of a Primary Editor in the upper pane and, for most object types, a Secondary Editor in the lower pane. The Primary Editor presents either the Property Editor, Event Editor, Audio Device Editor, or other associated editor depending on the type of object selected. The Secondary Editor includes a tab for each of the other editors that are most relevant to the currently selected object. The combined inclusion of an object's associated primary and secondary editors as part of the tab-based workflow brings the full context of editing into focus.
Refer to Working with Object Tabs and Object Tab Groups for further details on this new workflow.
Layouts and the management of layouts have been updated in alignment with changes to the editing workflow. The Interactive Music layout has been removed since all of the music-related editors are now easily accessed in the Designer layout's Object Tab. Both Object Tab Groups and Object Tabs can be detached from a layout as floating windows and can be docked within a layout or Object Tab Group in the same way as other views. Docking targets and previews have been improved to give a greater understanding of docking results before modifying a layout. Additionally, there have been improvements to the scaling of views when changing the size of the authoring window.
The Project Explorer Search allows for natural search and filtering within the Project Explorer view and also supports the Wwise Authoring Query Language (WAQL). This feature is particularly useful for large projects with many objects and multiple contributors. It helps focus navigation in the Project Explorer by highlighting and expanding matches.
Simply type or drag object(s) into the search field at the top of the Project Explorer to see the results. Refer to Using the Project Explorer Search for further details.
The properties of one object can now be copied to another object or objects using the Paste Properties view. This makes it easy to keep properties consistent across a project, reducing the potential for unnoticed inconsistencies and lending peace of mind. The Paste Properties view also provides a place to compare properties between a source and target object(s).
There are two types of information that you can copy and paste using this new view:
For example, you might want to use the same list of RTPCs across different vehicles or physics objects.
The Wwise Authoring API (WAAPI) has been extended to include the ability to return the properties and lists that differ between a source and one target (ak.wwise.core.object.diff) and to paste properties and lists from a source to one or more targets (ak.wwise.core.object.pasteProperties). This opens up the possibility of accelerating your development pipeline by performing Paste Properties operations (including RTPCs) as part of a custom WAAPI workflow.
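As a minimal sketch, these operations can be called like any other WAAPI remote procedure. The example below uses the AkAutobahn sample client shipped with the SDK; the object paths and the argument field names ("source", "target", "targets") are illustrative assumptions, so check the WAAPI reference for the exact payload.

```cpp
#include <AK/WwiseAuthoringAPI/AkAutobahn/Client.h>
#include <string>

void PastePropertiesOverWaapi()
{
    AK::WwiseAuthoringAPI::Client waapiClient;
    if (!waapiClient.Connect("127.0.0.1", 8080)) // default WAAPI port
        return;

    std::string result;

    // Return the properties and lists that differ between a source and one target
    // (argument field names are assumptions for illustration).
    waapiClient.Call("ak.wwise.core.object.diff",
        R"({"source": "\\Actor-Mixer Hierarchy\\Vehicles\\Car_Engine", "target": "\\Actor-Mixer Hierarchy\\Vehicles\\Truck_Engine"})",
        "{}", result);

    // Paste properties and lists (including RTPCs) from the source to one or more targets.
    waapiClient.Call("ak.wwise.core.object.pasteProperties",
        R"({"source": "\\Actor-Mixer Hierarchy\\Vehicles\\Car_Engine", "targets": ["\\Actor-Mixer Hierarchy\\Vehicles\\Truck_Engine"]})",
        "{}", result);

    waapiClient.Disconnect();
}
```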
Refer to Copying and Pasting Object Properties for details on using this view.
Optional curves for obstruction, occlusion, diffraction, and transmission have been added to attenuations, allowing per-Attenuation customization.
These curves can either:
This provides the flexibility to fine-tune the curves for these properties on a per-Attenuation basis in order to achieve a comprehensive vision for Spatial Audio.
At the same time, the Attenuation Editor has been rearranged to match the layout of the RTPC tab in the Property Editor:
Auto-defined SoundBanks bring a new asset management strategy to game engine integrations by enabling an Event-based SoundBank methodology within Wwise authoring. This opt-in feature works in conjunction with a game engine integration to dynamically load and unload Wwise resources at runtime, ensuring that only the Events and Media requested by the game are loaded into memory. Wwise allows both auto-defined and user-defined SoundBanks to coexist, letting developers make optimization choices about SoundBanks to suit their needs.
Additionally, a host of other workflow improvements have been developed to support an automated, granular approach to game-ready asset management, including:
This feature builds on our experience with the Event-Based Packaging (EBP) workflow released as part of the Wwise Unreal Integration and has allowed us to update EBP to leverage this new methodology. By originating Event-based SoundBanks within Wwise authoring, the flexibility of the EBP workflow becomes independent of the Unreal pipeline and allows for future growth across other game engine integrations.
The File Manager now includes a Generated Files tab, which displays information about the files in the output folder(s) of your project. The new tab lets you view the generated files for each of your project's platforms, one platform at a time. You can determine the status and owner of generated files, as well as which files are read-only. You can also move or delete generated files without leaving the File Manager.
If you're using a source control plug-in, you can right-click any file in the list to access an additional set of commands, such as Submit Changes and Check out.
The Transport Control view is now more compact:
The Property Help view has been renamed Contextual Help view to reflect its newly expanded purpose. You can now select any error message in the Capture Log to see a detailed description, probable causes, and resolution steps in the Contextual Help view.
The view continues to provide help entries when interacting with any field, check box, button, or slider, as long as it sets a property rather than opening a dialog, list, or view.
All content in the Contextual Help view is available in English, Japanese, Simplified Chinese, and Korean, depending on the Documentation Language selected in the Help menu.
The accumulation method used for Voice LPF and Voice HPF can now be defined in the Project Settings. This allows you to choose whether to arrive at the final value by summing all values (previously the only method available) or by taking the highest of all object values.
In this example, each object specifies a value for the Low-pass filter property. The Low-pass filter value applied to the sound depends on the accumulation method chosen.
Sum All Values: The Low-pass filter property values of each object are summed.
Use Highest Value: The highest of all of the object Low-pass filter property values is used.
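To make the difference concrete, here is a small illustrative sketch (not Wwise API code) using hypothetical Low-pass filter contributions of 20, 35, and 10:

```cpp
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

int main()
{
    // Hypothetical Low-pass filter contributions (0-100) from objects affecting
    // the same voice, for example the sound, its parent, and an applied State.
    std::vector<float> lpfValues = { 20.0f, 35.0f, 10.0f };

    // Sum All Values: 20 + 35 + 10 = 65 (clamped here to the 0-100 LPF range).
    float summed = std::min(100.0f, std::accumulate(lpfValues.begin(), lpfValues.end(), 0.0f));

    // Use Highest Value: max(20, 35, 10) = 35.
    float highest = *std::max_element(lpfValues.begin(), lpfValues.end());

    std::printf("Sum All Values: %.0f, Use Highest Value: %.0f\n", summed, highest);
    return 0;
}
```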
In the States tab of the Property Editor, you can now copy and paste State values between different State Groups and between different object types, including between the Master-Mixer, Actor-Mixer, and Interactive Music Hierarchies. You can even select multiple target objects and paste values to all target objects with one click.
You can also copy State Groups, along with their associated States, properties, and State values, from one object to another, regardless of object type or hierarchy.
The Target field of the Event Action Set State now indicates both the State and its State Group. This makes it easier to distinguish between identically named States in different groups.
The Performance Monitor default settings have been updated to include all of the most useful graph settings.
Selection Channels have replaced Sync Groups and allow you to sync views in either of the following ways:
Meanwhile, to avoid confusion, meters no longer have Sync Groups; they have Instances (A, B, C, and D).
Mouse wheel behavior has been updated in the following locations for a better experience:
In general, for time-based and range-based views, the following conventions have been adopted:
The following keyboard shortcuts have been changed:
The shortcut menus in the Contents Editor, List View, Query Editor, and Reference View have been updated and now include options to:
The overall performance of WAQL has been improved, along with the following additions:
The following features were added to WAAPI:
The Reflect plug-in has a new simplified workflow, making it easier to get started with the creative use of dynamic early reflections. A sound's attenuation curves can now be leveraged to attenuate reflection paths, affording new customization opportunities. Curves can be inherited from a sound's attenuation curves, customized, or disabled on a per-curve basis. Additionally, the Distance and Diffraction curves for each instance of the Reflect plug-in can be warped, giving you creative control to emphasize or de-emphasize reflections.
The new 3D Audio Bed Mixer plug-in can be inserted on an Audio Objects bus to reduce the number of Audio Objects passing through the bus. It works by mixing some of the Audio Objects, resulting in three possible outputs: a main mix, a passthrough mix, and a collection of unmixed Audio Objects that are eligible for promotion to System Audio Objects at the end of the pipeline. Several settings allow you to customize the behavior of the plug-in.
The Time Stretch plug-in includes a new, higher quality stretch mode that preserves transients and lets you control quality vs CPU performance. The Time Stretch plug-in can be used to change the speed or duration of an audio signal without affecting its pitch. It allows for both time stretching and time compression and is suitable for use on monophonic as well as polyphonic sounds. Additionally, properties to control the pitch shift and random pitch shift have been added along with stereo processing modes to better handle stereo imaging and increase the flexibility of Time Stretch as a creative tool.
Event-Based Packaging in the Wwise Unreal Integration now leverages the auto-defined SoundBanks feature, new to Wwise 2022.1. The content of SoundBank metadata files has been improved to include all the information the runtime engine needs to play back sounds as the designers intended. This allows us to remove all asset synchronization code from our Unreal integration, because it can rely on the metadata generated by Wwise being valid.

As a result, instead of automatically populating the Unreal Content folder with every item present in the Wwise project, Unreal assets are only created when they are needed. Unless you choose to use a specific Event in your Unreal project, no asset is created for it on disk. These assets only contain a reference to the resources they use until you package your project; only then are the SoundBanks copied to the Unreal staging folder hierarchy. This also solves a problem for project members who do not work on audio and who previously had to install Wwise to have audio in their project: the new integration only requires the presence of the GeneratedSoundbanks folder to provide full Wwise audio support in the project.
Large worlds are now fully supported with double-precision (64-bit) vectors. Previously it was necessary to either work in 32-bit precision, reposition the center of the universe, or use customized Local Grids. Continuous updates will be provided for Unreal Engine 5.0 releases. For more information, refer to the Important Migration Notes 2022.1.
When implementing audio in a game or simulation that uses a third-person perspective (TPP), it’s not always obvious where to place the Listener Game Object; while some would suggest the position of the camera, others prefer the position of the character controlled by the player. The new Distance Probe, available as an optional counterpart to a Listener Game Object, offers the best of both worlds.
When a Distance Probe is assigned to a Listener, the attenuation distance applied to all sounds routed to the Listener is based on the distance between the Distance Probe and the Emitter Game Object. Panning, spatialization, spread, and focus are always based on the position and orientation of the Listener Game Object, regardless of whether or not a Distance Probe is assigned. With this decoupling of distance using the Distance Probe, you can dynamically control what is heard by the player in different gameplay scenarios.
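In code, assigning a probe is a single call. The sketch below assumes the AK::SoundEngine::SetDistanceProbe API introduced alongside this feature; the game object IDs are hypothetical, and the cleanup call shown in the comment is an assumption.

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>

// Hypothetical game objects registered elsewhere in the game.
static const AkGameObjectID LISTENER_CAMERA   = 100; // camera: drives panning, spread, and focus
static const AkGameObjectID DISTANCE_PROBE_ID = 101; // player character: drives attenuation distance

void SetupThirdPersonListener()
{
    // Attenuation for all sounds routed to this listener is now computed from the
    // distance between the probe and each emitter, while orientation-based panning,
    // spatialization, spread, and focus still follow the listener itself.
    AK::SoundEngine::SetDistanceProbe(LISTENER_CAMERA, DISTANCE_PROBE_ID);

    // Assumption: passing AK_INVALID_GAME_OBJECT as the probe removes the assignment.
    // AK::SoundEngine::SetDistanceProbe(LISTENER_CAMERA, AK_INVALID_GAME_OBJECT);
}
```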
Additionally:
Spatial Audio is a rapidly evolving feature set that is continuously optimized to balance runtime performance with believable environmental representation. Computing diffraction requires finding paths along diffraction edges and can consume a lot of CPU at runtime. To optimize CPU performance, we now create a visibility map from diffraction edge to diffraction edge using a ray-tracing approach that collects the important neighboring diffraction edges; this map is then used to accelerate the diffraction calculations. Additionally, we spread the spatial audio computation load across several audio frames to reduce CPU peaks. Each spatial audio task (ray casting, path validation, and so on) is placed in a priority queue, and the load balancing spread setting determines how many tasks from the queue are executed each frame. Load balancing is particularly beneficial when using many emitters simultaneously.
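The spread is configured at Spatial Audio initialization. This is a minimal sketch assuming a uLoadBalancingSpread member on AkSpatialAudioInitSettings; verify the exact field name in the AkSpatialAudio.h header of your SDK version.

```cpp
#include <AK/SpatialAudio/Common/AkSpatialAudio.h>

void InitSpatialAudioWithLoadBalancing()
{
    AkSpatialAudioInitSettings spatialSettings; // defaults for all other members

    // Assumption: spread spatial audio tasks (ray casting, path validation, ...)
    // across 4 audio frames to smooth out CPU peaks instead of running them all
    // in a single frame.
    spatialSettings.uLoadBalancingSpread = 4;

    AK::SpatialAudio::Init(spatialSettings);
}
```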
The Wwise sound engine now supports large world coordinates for Game Objects by moving to double-precision values for positioning. This allows the positioning of sounds to behave predictably, even with worlds that are billions of units in size. APIs dealing with Game Object positions have been updated to use two new types, AkWorldTransform and AkVector64, which provide the additional data. For more information, refer to Important Migration Notes 2022.1.
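As a rough sketch of the updated positioning API, the example below assumes AkSoundPosition is now based on AkWorldTransform, so positions are set with 64-bit doubles (AkVector64) while orientation vectors remain 32-bit; the emitter ID and coordinates are hypothetical.

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>

void PositionFarAwayEmitter(AkGameObjectID in_emitterID)
{
    AkSoundPosition position; // assumed to be based on AkWorldTransform in 2022.1

    // Double-precision position: billions of units from the origin stay accurate.
    position.SetPosition(3500000000.0, 250.0, -1200000000.0); // AkReal64 components

    // Orientation front and top vectors are still single-precision (AkVector).
    position.SetOrientation(0.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f);

    AK::SoundEngine::SetPosition(in_emitterID, position);
}
```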
Managing the growing number of parameters used to create dynamic audio at runtime just got faster and more efficient. The performance of Real Time Parameter Controls (RTPCs) and Switches is now unaffected by the number of Wwise objects loaded in memory and their use of RTPCs or Switches. Setting and updating an RTPC or Switch value has been optimized and now depends only on how many registered game objects and active sounds are using that RTPC or Switch.
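No API changes are required to benefit from this: the usual calls below now scale with the number of registered game objects and active sounds rather than with the number of loaded Wwise objects. The RTPC name, Switch Group, Switch values, and game object ID are hypothetical examples.

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>

void UpdateVehicleAudio(AkGameObjectID in_vehicleID, float in_speedKmh, bool in_onGravel)
{
    // Scoped to a single game object: cost now depends only on the registered game
    // objects and active sounds that actually use this RTPC.
    AK::SoundEngine::SetRTPCValue("Vehicle_Speed", in_speedKmh, in_vehicleID);

    // Same for Switches: only the voices playing on this game object are affected.
    AK::SoundEngine::SetSwitch("Surface_Type", in_onGravel ? "Gravel" : "Asphalt", in_vehicleID);
}
```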
Multithreaded execution of the audio rendering pipeline has been rewritten and optimized to achieve greater runtime performance. This was most notably achieved by moving to an internal job-based scheduler with modeling of dependencies, instead of a fork-and-join method of parallel execution. Key highlights are:
This means that Wwise uses CPU resources more effectively to deliver a richer audio experience, while also providing better coordination of work and scheduling with the game engine.