Wwise SDK 2023.1.8
New Features Overview 2023.1

Wwise 2023.1 expands functionality at the core of the Wwise interactive audio pipeline and continues to build toward giving you greater means to achieve your audio vision. A number of features bring the editing workflow into sharper focus as the user interface continues to evolve, and the Wwise Authoring API (WAAPI) grows to support more Wwise workflows. Meanwhile, Spatial Audio continues to improve with an approach that begins with science, applied as detailed spatial intelligence at runtime, together with increased usability. Alongside these changes are the fundamental building blocks that will allow Wwise to keep growing as a comprehensive interactive audio solution.

Spatial Audio

3D Audio

Wwise added native support for 3D Audio with the introduction of the object-based pipeline in 2021.1, along with workflows to help guide and adapt mixes for platforms that can leverage the precision of Spatial Audio. With the ability to inform and route sounds with positional precision, the mix can be adapted across any channel configuration, including headphone binauralization, with high accuracy. Wwise 2023.1 continues to enhance adaptability and compatibility: it adds Apple and Android devices to the list of platforms that support spatialization, alongside Windows, Xbox, and PlayStation.

Dolby Wwise Partnership

This year, Audiokinetic has partnered with Dolby to continue to bring Wwise users a standardized way to produce and deliver spatial interactive audio content for games, which is adaptive across devices and platforms. Refer to the press release for more information about what's on the horizon for Wwise and Dolby.

Apple Spatial Audio Support

3D Audio support for Apple platforms has been added to the System Audio Device. Mac and iOS devices can now leverage Apple Spatial Audio at runtime.

Android Spatial Audio Support

3D Audio support for Android platforms has been added to the System Audio Device. Android devices that include the Android Spatializer can now leverage 3D Audio at runtime.

Acoustic Simulation

The desire to bring players more dynamic audio realism, informed by the worlds they are immersed in, continues to grow. The purpose of Wwise Spatial Audio is to convince players that sounds originate from these worlds. To do so, Spatial Audio provides a combination of technology, creativity, and accessibility: technical accuracy based on a real-world understanding of acoustic phenomena, the ability to creatively manipulate environmental audio to serve the aesthetic needs of the experience, and continued efforts to simplify Spatial Audio features so that anyone can use them.

The introduction of Continuous Ray Casting, Room Containment, improved Portal reflections, the revised Room Aux Send Model, Reverb Zones, and Reflect decorrelation helps users meet and exceed their real-time spatial acoustic needs. These features continue to deliver the latest simulation technology and acoustic accuracy. They also demonstrate our commitment to growing alongside Wwise users: their needs, their abilities, and their desire to represent experiences realistically, finely tuned to their creative visions.

Continuous Ray Casting

Ray casting provides the information that drives the environmental influence of game geometry on sound. Understanding the spatial characteristics that surround the listener means that Wwise can represent sounds that use Spatial Audio with precision. The rays inform the obstruction, occlusion, transmission, and diffraction applied to sounds when objects or geometry are present. In the previous implementation, a large number of primary rays was cast only when the listener's movement exceeded a predefined threshold. Because these rays were all cast on the same frame, the sudden load increase could cause occasional CPU peaks, and the movement threshold itself was difficult to adjust. Now, a significantly smaller number of primary rays is cast on each frame, which spreads the load over several frames and prevents CPU peaks. This approach optimizes the overall cost of continuous ray casting, makes the movement threshold obsolete, and simplifies the implementation while enhancing performance.
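
A minimal C++ sketch of tuning the ray budget at Spatial Audio initialization follows. uNumberOfPrimaryRays is an existing member of AkSpatialAudioInitSettings; its interpretation as a per-frame budget is inferred from the description above, so verify the exact meaning and default value in the SDK reference.

    // Minimal sketch: tuning the per-frame primary ray budget at Spatial Audio initialization.
    // Treating uNumberOfPrimaryRays as a per-frame budget follows the description above;
    // verify the default and exact meaning in the SDK reference. Other settings keep defaults.
    #include <AK/SpatialAudio/Common/AkSpatialAudio.h>

    bool InitSpatialAudio()
    {
        AkSpatialAudioInitSettings settings;   // default-constructed settings
        settings.uNumberOfPrimaryRays = 35;    // illustrative per-frame ray budget
        return AK::SpatialAudio::Init(settings) == AK_Success;
    }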

Room Containment

Spatial Audio relies on knowing which Room a sound is in so that calculations can be made in relation to the listener. Wwise 2023.1 improves how the Room that contains a game object (emitter) is determined so the object can be assigned to it automatically. If a Room is fully contained inside another Room and does not share any surface with the surrounding Room, the correct Room can be determined without ambiguity. However, if a Room is partially inside another Room, or the Rooms share at least one common surface, it was previously difficult to determine the correct Room to which to assign the sound. Now, you can assign a priority number to each Room, and the priority system determines the correct Room. If an object is inside several Rooms at the same time (for example, when a Room is inside another Room), the assigned Room is always the one with the highest priority, or the innermost Room (according to the result of the ray casting) if priorities are equal. Furthermore, the API now provides a way to override the automatically assigned Room for a given game object: the specified Room replaces the automatically assigned Room until you explicitly remove the Room association.
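
The following minimal C++ sketch shows how Room priorities and the manual Room override might be used. AK::SpatialAudio::SetRoom and SetGameObjectInRoom are existing Spatial Audio functions; the Priority member of AkRoomParams and UnsetGameObjectInRoom are assumptions based on the description above, so verify the exact names in AkSpatialAudio.h.

    // Minimal sketch: Room priorities and manual Room assignment override.
    // The Priority member and UnsetGameObjectInRoom are assumed names (see lead-in above).
    // Geometry registration for each Room is omitted for brevity.
    #include <AK/SoundEngine/Common/AkSoundEngine.h>
    #include <AK/SpatialAudio/Common/AkSpatialAudio.h>

    const AkRoomID kOuterRoom(100);
    const AkRoomID kInnerRoom(101);
    const AkGameObjectID kEmitter = 1000;

    void SetupRooms()
    {
        AkRoomParams outerParams;
        outerParams.ReverbAuxBus = AK::SoundEngine::GetIDFromString("OuterRoomVerb");
        outerParams.Priority = 1;  // assumption: lower priority for the enclosing Room
        AK::SpatialAudio::SetRoom(kOuterRoom, outerParams, "Outer Room");

        AkRoomParams innerParams;
        innerParams.ReverbAuxBus = AK::SoundEngine::GetIDFromString("InnerRoomVerb");
        innerParams.Priority = 2;  // assumption: the highest priority wins when Rooms overlap
        AK::SpatialAudio::SetRoom(kInnerRoom, innerParams, "Inner Room");
    }

    void OverrideEmitterRoom()
    {
        // Force the emitter into a specific Room, bypassing automatic containment...
        AK::SpatialAudio::SetGameObjectInRoom(kEmitter, kInnerRoom);

        // ...and later return to automatic Room assignment (assumed 2023.1 addition).
        AK::SpatialAudio::UnsetGameObjectInRoom(kEmitter);
    }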

Improved Portal Reflections

Reflections now propagate naturally through Portals. Previously, when the emitter and listener were in separate Rooms, the paths between them were found in a piecewise fashion: paths from the emitter to a Portal, from Portal to Portal, and from Portal to listener were computed separately and then stitched together, a complex process. Spatial Audio now uses a stochastic engine for holistic path finding that leverages Portal edge receptors, so sound propagates through Portals just as it does around regular geometry, including the natural propagation of reflections.

Revised Room Aux Send Model

The revised Room Aux Send Model updates the signal flow for diffraction, transmission loss, occlusion, and obstruction while allowing the emitter to send directly to all adjacent Rooms with direct sound paths to the listener. Sounds can therefore traverse an environment with the reverb contribution of each Room preserved and contributing appropriately to the resulting mix. Because an emitter now sends to adjacent Rooms, when the emitter and listener are in the same Room but close to an open Portal, the sound excites the nearby Room. This change enhances the fidelity of the emitter's send values and means that diffraction can now be calculated differently for each Room. Attenuation, which was previously based on the emitter-listener distance, now matches the attenuation for the path length, which adds consistency to the resulting sound.

Reverb Zones

Reverb Zones expand on the existing Rooms and Portals methodology for defining spaces and the connection between them. You can use Reverb Zones to uniquely specify spaces within Rooms without Portal connections. With the complexity of modern environments, it is common to require custom reverbs in areas of the world that are not logically delineated as separate Rooms, and it can be awkward or even impossible to separate them with the use of Portals. The Reverb Zone, which can have some walls or no walls at all, integrates tightly with the sound propagation framework: reverb contributions, diffractions, and reflections are rendered where appropriate.

The accompanying illustrations show the following scenario:

  • An emitter plays inside a Room.
  • The sound propagates out of a Portal, which is connected to a Reverb Zone.
  • The sound continues to propagate outside to the Reverb Zone's parent Room. Once outside, the sound also diffracts around an obstacle defined by geometry.

Possible uses for Reverb Zones include the following: a covered area with no walls, a forested area within an outdoor space, or any situation in which multiple reverb effects are desired within a common space. Reverb Zones have many advantages compared to standard Game-Defined Auxiliary Sends: they are part of the revised Room Aux Send model (refer to the previous section), and form reverb chains with other Rooms; they are spatialized according to their 3D extent; and they are also subject to other acoustic phenomena simulated in Wwise Spatial Audio, such as diffraction and transmission.

Reverb Zones are Rooms with two additional properties: a Parent Room and a Transition Region Width. The parent-child relationship between the Reverb Zone and its Parent Room forms a hierarchy that allows sound to propagate between them as if they were the same Room, without the need for a connecting Portal. A parent Room can have multiple Reverb Zones, but a Reverb Zone can only have a single parent. A Reverb Zone cannot be its own parent, and Portals cannot connect Rooms within the same Reverb Zone hierarchy. The automatically created 'outdoors' Room is commonly used as a parent Room for Reverb Zones, because they often model open spaces. The Transition Region Width defines a region between the Reverb Zone and its parent, centered on the Reverb Zone geometry. It only applies where the triangle transmission loss is set to 0.
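
At the SDK level, a Reverb Zone is registered like any other Room and then attached to its parent. The minimal C++ sketch below assumes a SetReverbZone function and a kOutdoorRoomID constant for the automatically created 'outdoors' Room; verify the exact names and signatures in AkSpatialAudio.h.

    // Minimal sketch: registering a Reverb Zone for a covered area with no walls.
    // SetReverbZone and kOutdoorRoomID are assumed names (see lead-in above).
    // Geometry registration for the zone is omitted for brevity.
    #include <AK/SoundEngine/Common/AkSoundEngine.h>
    #include <AK/SpatialAudio/Common/AkSpatialAudio.h>

    const AkRoomID kGazeboZone(200);

    void SetupReverbZone()
    {
        // Register the zone as a regular Room first, with its own reverb send.
        AkRoomParams zoneParams;
        zoneParams.ReverbAuxBus = AK::SoundEngine::GetIDFromString("GazeboVerb");
        AK::SpatialAudio::SetRoom(kGazeboZone, zoneParams, "Gazebo Reverb Zone");

        // Attach it to the automatically created 'outdoors' Room as a Reverb Zone,
        // with a 2 m transition region centered on the zone's geometry.
        AK::SpatialAudio::SetReverbZone(kGazeboZone, AK::SpatialAudio::kOutdoorRoomID, 2.0f);
    }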

The Unreal and Unity integrations include Reverb Zone components and related functions.


Reflect Phasing Mitigation

The Reflect plug-in now offers two different ways to reduce phasing: Direct Sound Max Delay and Decorrelation Strength. Direct Sound Max Delay suppresses phasing artifacts that can occur between reflections and the direct sound, while Decorrelation Strength controls a filter that "scrambles" reflected signals just enough so that they sound similar to the unscrambled signal but differ enough to prevent comb-filtering when mixed with the direct signal. The decorrelation filter can also be used to widen stereo fields.

Unreal Obstruction and Occlusion Improvements

There have been several Spatial Audio improvements that simplify the use of Obstruction in Rooms and Portals.

The AkComponent has properties for Collision Channel and Refresh Interval. Previously, these properties were only used to calculate obstruction or occlusion values between emitters and listeners in the same Spatial Audio Room: when there were no Rooms, occlusion was applied; when there were Rooms, obstruction was applied.

The AkPortalComponent also has properties for Collision Channel and Refresh Interval. Previously, these properties were used to apply obstruction and occlusion values to the Portal that affected all sounds that went through it.

Now, when a non-zero value is specified for the Refresh Interval property of AkComponents and AkPortalComponents, obstruction is calculated between AkComponents and their listeners (even in different Rooms), between AkComponents and AkPortalComponents of the same Room, and between AkPortalComponents and other AkPortalComponents of the same Room. This calculation accurately informs the amount of filtering applied to the emitted sound. You can use this feature in scenarios that include Spatial Audio diffraction and transmission, but make sure that the Collision Channel used for obstruction checks does not include geometry sent to Spatial Audio for diffraction and transmission.

The previous behavior of the AkPortalComponent Collision Channel and Refresh Interval properties is now available through occlusion. You can set an initial occlusion value in the Details panel, or use a Blueprint function to set an occlusion value at any moment, for example to modulate sound in response to doors opening or closing.


Authoring

Increased Effects per Object

The number of Effects you can add to an object has increased from 4 to 255. This means it's no longer necessary to restrict the creative application of dynamic or rendered Effects. This change is represented across all views where Effect properties are accessed and includes the ability to insert, edit, and measure Effects.


Loudness Normalization Modes and Target

To provide flexibility and greater control over loudness measurement for short-duration sounds during Loudness Normalization, a Momentary Max measurement type has been added along with a loudness Target property. You can now choose between the Integrated measurement, which is most useful for long audio program material such as music or cinematics, and Momentary Max, which is best suited for short individual audio elements such as sound effects. This makes it easier to non-destructively tune loudness normalization across different sound categories and arrive at a consistent standard on which to build your interactive mix.



Source Control Support in the Project Migration Dialog

An option has been added to the Project Migration dialog to allow automatic checkout during project migration when using Source Control.

User Interface Enhancements

The following sections detail a variety of user interface improvements that maximize the use of available space, increase comprehension, and support speedy implementation.

Audio Device Secondary Editor

The Audio Device Editor now includes a Secondary Editor where you can access tabs for the Audio Device Meter and Effect Editors. This change keeps the navigation of Effects and their properties in context with the Audio Device while moving meters from the Primary Editor to their own editing tab. It also makes it possible to switch between ShareSets of the cross-platform Mastering Suite without navigating away from the Object Tab.


Meter Tab in the Secondary Editor

Audio busses now include a Meter tab in the Secondary Editor, which you can detach using the "Open in New Window" (Alt+F11) button. These Meter tabs are automatically pinned, meaning they are not associated with any of the four Meter Instances. This allows for an increased number of Meters, expanding opportunities to monitor different parts of the audio hierarchy.



Source Editor and Music Segment Editor Workflow Improvements

The Source Editor now provides greater control over auditioning and marker manipulation. You can position the cursor by clicking anywhere on the timeline ruler at the top. You can then drag the cursor to any position in the source while the precise time is simultaneously displayed. Additionally, holding the ALT key while hovering over a marker, cursor, or cue in the Music Segment Editor displays the time and allows you to drag along the timeline.

Search Tool Improvements

You can now navigate the Search tool results using the arrow keys. The selection of objects in the list of results is reflected in the recycling Object Tab, as well as in the Transport Control, making it possible to audition them. You can also right-click an object in the list of results to access the shortcut menu without navigating away from the Object Tab in focus, and keyboard shortcuts work on the selection.

User Layouts

Four User Layouts have been added to the Layouts menu to allow you to save customized configurations of views. Now you can create personalized layouts to support specific workflows without modifying any of the default layouts.


Reoriented Effect Editor Tabs

The layouts of all of the Effect editors have been optimized for horizontal presentation within the Secondary Editor. Additionally, the maximum height of the editor is now set to the maximum height of the Effect unless the height of the Secondary Editor has been manually adjusted.




Effect Editor Optimization

Buttons have been added to the Effect list to add and delete Effects, and a new shortcut menu item has been added to insert an Effect at a specific slot. You can now drag and drop Effect ShareSets into the list, and you can select multiple items in the list to copy/paste or delete Effects. Additionally, blank spaces have been optimized for a cleaner presentation of user interface elements.

Blend Track and RTPC Graph Improvements

Small changes have been made to the presentation of the Blend Track and RTPC graphs: object names are now overlaid on the graphs, optimizing the space a graph occupies. Additionally, the graphs have been optimized for different font sizes.


Modernized 3D Graphics Backend

A modernized 3D graphics backend replaces the DirectX 9 implementation and is now used to render the Game Object 3D Viewer, the 3D Meter, and the Audio Object 3D Viewer. This change brings hardware compatibility improvements along with a host of user experience updates: improved first-person navigation and zoom, and smoother transitions between list item selections in the Game Object 3D Viewer.

Resizable Soundcaster Game Sync Lists

The Soundcaster Game Sync lists are now resizable to accommodate different lengths.

Replaced Display Option Dialogs with Views

Display options for the Capture Log, Audio Object 3D Viewer, and Game Object 3D Viewer are now located in views instead of dialogs. These views can be docked for easy access.



Improved Profiling Synchronization

Synchronization when remote connecting has been improved as part of foundational work toward future live editing improvements. When Profile and Edit (Sync All Modified Objects) is selected, all objects loaded in the remote instance are synchronized upon connection, and as SoundBanks are loaded while connected.

Remote Connection Using IP:PORT without UDP

It is now possible to connect to Wwise by specifying an IP address and port directly, without relying on UDP, which enables connections in network environments that were previously unsupported.

Line Ending Option for Saved Files

Also included in Wwise 2021.1.14 and Wwise 2022.1.9, you can now choose between two possible line endings for saved files: LF (default) and CRLF.

Wwise applies the selected line ending to all of the following files:

  • The project file: .wproj
  • The project settings file: .wsettings
  • Work Unit files: .wwu
  • Wwise_IDS.h
  • SoundBank files:
    • bank, event, and bus files: .xml, .json, .txt
    • ProjectInfo.xml, ProjectInfo.json
    • PlatformInfo.xml, PlatformInfo.json
    • SoundbanksInfo.xml, SoundbanksInfo.json
    • PluginInfo.xml, PluginInfo.json

This option is found in the General tab of the Project Settings.

WAAPI

  • WAAPI Audio File Import
    • Audio File Import is now possible in ak.wwise.core.object.set.
  • New WAAPI Functions
    • New functions have been added to the Wwise Authoring API that provide access to operations and properties that improve development workflows with Wwise Authoring. A minimal call sketch follows this list.
    • Project: Adds control over Wwise project loading and creation.
      • ak.wwise.console.project.close
      • ak.wwise.console.project.create
      • ak.wwise.console.project.open
      • ak.wwise.ui.project.create
    • Mute/Solo: Adds control over the Mute/Solo status.
      • ak.wwise.core.audio.mute
      • ak.wwise.core.audio.solo
      • ak.wwise.core.audio.resetMute
      • ak.wwise.core.audio.resetSolo
    • Objects: Adds support for linking, state groups and properties, and redo operations.
      • ak.wwise.core.object.isLinked
      • ak.wwise.core.object.setLinked
      • ak.wwise.core.object.setStateGroups
      • ak.wwise.core.object.setStateProperties
      • ak.wwise.core.undo.redo
    • Profiler: Adds more Profiler information and adds the ability to save capture data.
      • ak.wwise.core.profiler.getCpuUsage
      • ak.wwise.core.profiler.getLoadedMedia
      • ak.wwise.core.profiler.getPerformanceMonitor
      • ak.wwise.core.profiler.getStreamedMedia
      • ak.wwise.core.profiler.saveCapture
    • Source Control: Adds Source Control operations.
      • ak.wwise.core.sourceControl.add
      • ak.wwise.core.sourceControl.checkOut
      • ak.wwise.core.sourceControl.commit
      • ak.wwise.core.sourceControl.delete
      • ak.wwise.core.sourceControl.getSourceFiles
      • ak.wwise.core.sourceControl.getStatus
      • ak.wwise.core.sourceControl.move
      • ak.wwise.core.sourceControl.revert
      • ak.wwise.core.sourceControl.setProvider
    • Misc
      • ak.wwise.core.executeLuaScript
      • ak.wwise.debug.generateToneWAV
      • ak.wwise.core.log.addItem
      • ak.wwise.core.log.clear
    • Topics:
      • ak.wwise.core.log.itemAdded: new 'SourceControl' channel added
  • WAAPI Work Units and Perforce
    • Source Control operations are now supported through the command line with waapi-server mode.
    • WAAPI operations involving Source Control are now completely silent and do not cause pop-ups.
    • Added support for Work Unit operations inside WAAPI functions.
    • Added support for Automatic Source Control Check Out and Add in several WAAPI functions.
    • Added complete Source Control API through WAAPI.
  • Object Model
    • The following elements have been refactored and are now accessible in WAAPI and WAQL:
      • Playlist in Random/Sequence Container
      • Markers in Audio File Source
      • Associations and Arguments in Music Switch Container
      • Entries in Dialogue Events
      • Trigger and Segment Reference in Music Stinger
      • Transition Root in music objects
      • Playlist Root in Music Sequence Container
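
The following minimal C++ sketch shows how one of these new functions could be called using the AkAutobahn sample client distributed with the Wwise SDK samples. The include path, the default port, and the argument schema for ak.wwise.core.audio.mute are assumptions; refer to the WAAPI reference for the exact payloads.

    // Minimal sketch: calling a new WAAPI function from C++ with the AkAutobahn sample client.
    // Include path, default port, and argument schema are assumptions (see lead-in above).
    #include <iostream>
    #include <string>
    #include "AkAutobahn/Client.h"

    int main()
    {
        AK::WwiseAuthoringAPI::Client client;
        if (!client.Connect("127.0.0.1", 8080))  // default WAAPI port
            return 1;

        // Mute an object by project path (argument names are an assumption).
        const char* args =
            R"({"objects": ["\\Actor-Mixer Hierarchy\\Default Work Unit\\MySound"], "value": true})";

        std::string result;
        if (client.Call("ak.wwise.core.audio.mute", args, "{}", result))
            std::cout << result << std::endl;

        return 0;
    }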

WAQL

  • Added list support to WAQL.
  • Added the following list functions:
    • count(CONDITION?)
    • any(CONDITION?)
    • all(CONDITION?)
    • first(CONDITION?)
    • last(CONDITION?)
    • take(NUMBER)
    • skip(NUMBER)
    • at(NUMBER)
    • where(CONDITION)
  • Added aliases in return expressions.
  • Added support for returning complex structures and arrays.
  • Added the following accessors in WAQL expressions and queries:
    • nodeType
    • originalRelativeFilePath
    • loudness.integrated
    • loudness.momentaryMax
    • stateProperties
    • stateGroups
    • validity.details
    • validity.severity
    • validity.isValid

Integrations

Unreal Wwise Browser

Also included in Wwise 2022.1.5, the Wwise Browser has replaced the Wwise Picker in the Wwise Unreal Integration. It shows the status of Wwise and Unreal assets, and has filters for SoundBanks, UAssets, and Types, which you can use to focus on different aspects of project status. Through the new right-click shortcut menu, you can navigate the project from Unreal and perform transport control operations, among others. Refer to Managing Assets with the Wwise Browser for further details.




Asset Reconciliation

Asset reconciliation was added to the Wwise Browser in Wwise 2022.1.6. You can now reconcile assets that are out of sync through the Wwise Browser itself or with a commandlet. For more information, refer to Reconciling Wwise UAssets.

Auto-sync can be enabled with a module.



Unreal Auto-Defined SoundBank Improvements

There were general improvements to the stability of the Auto-defined SoundBank workflows based on feedback and suggestions from the Wwise community.

Unity Integration: Support Enter Play Mode Options

The Wwise Unity Integration now includes "Enter Play Mode" options to reduce the time needed to enter play mode from the Editor.

SDK

Marker Callbacks

The marker callback notification structure, AkMarkerCallbackInfo, now provides the size of the label text. Developers can use this information to embed custom data within WAV file markers to trigger game events.
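
A minimal C++ sketch of receiving marker notifications follows. PostEvent with the AK_Marker flag and the AkMarkerCallbackInfo structure are existing SDK APIs; the name of the new label-size member is an assumption, so check AkCallback.h.

    // Minimal sketch: requesting marker notifications and reading the marker label.
    // The name of the new label-size member is an assumption; check AkCallback.h.
    #include <AK/SoundEngine/Common/AkSoundEngine.h>
    #include <AK/SoundEngine/Common/AkCallback.h>

    static void OnMarker(AkCallbackType in_eType, AkCallbackInfo* in_pInfo)
    {
        if (in_eType != AK_Marker)
            return;

        AkMarkerCallbackInfo* pMarker = static_cast<AkMarkerCallbackInfo*>(in_pInfo);

        // strLabel holds the marker text; the new size member (assumed here to be uLabelSize)
        // lets you parse binary or delimited custom data embedded in the label.
        const char* label = pMarker->strLabel;
        // AkUInt32 labelSize = pMarker->uLabelSize;  // assumption: verify the member name

        (void)label;  // react to the marker here, for example by triggering a game event
    }

    void PlayWithMarkers(AkGameObjectID in_gameObject)
    {
        AK::SoundEngine::PostEvent(
            "Play_Dialogue",  // placeholder Event name
            in_gameObject,
            AK_Marker,        // request marker notifications
            &OnMarker,
            nullptr);         // optional cookie
    }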

TTS Source Plug-in Development

To facilitate the development of TTS (Text-To-Speech) source plug-ins, the SDK now allows source plug-ins to send data to and receive data from the game.

Using AK::IAkSourcePluginContext::GetPluginCustomGameData(), a plug-in can retrieve data sent to it from the game through AK::SoundEngine::SendPluginCustomGameData().
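
The game-side call might look like the following minimal C++ sketch, which assumes the SendPluginCustomGameData signature documented for bus plug-ins also applies when targeting a source plug-in. The company and plug-in IDs are placeholders, and the plug-in-side retrieval is shown as comments because the exact IAkSourcePluginContext signature should be verified in IAkPlugin.h.

    // Minimal sketch: sending text to a TTS source plug-in from the game.
    // Signature assumptions and placeholder IDs are described in the lead-in above.
    #include <cstring>
    #include <AK/SoundEngine/Common/AkSoundEngine.h>
    #include <AK/SoundEngine/Common/IAkPlugin.h>

    void SendTextToSpeech(AkGameObjectID in_gameObject, const char* in_utf8Text)
    {
        AK::SoundEngine::SendPluginCustomGameData(
            AK_INVALID_UNIQUE_ID,      // assumption: no specific bus targeted
            in_gameObject,             // game object the TTS voice plays on
            AkPluginTypeSource,
            64,                        // placeholder company ID
            100,                       // placeholder plug-in ID
            in_utf8Text,
            static_cast<AkUInt32>(std::strlen(in_utf8Text) + 1));
    }

    // Plug-in side, inside the source plug-in (shown as comments; verify the exact
    // IAkSourcePluginContext signature in IAkPlugin.h):
    //
    //     void* pData = nullptr;
    //     AkUInt32 uSize = 0;
    //     m_pSourceContext->GetPluginCustomGameData(pData, uSize);
    //     if (pData && uSize > 0)
    //     {
    //         const char* text = static_cast<const char*>(pData);
    //         // ...synthesize audio from 'text'...
    //     }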

Source plug-ins can now generate marker notifications. See Marker Generator Source Plug-in Example.

Within a source plug-in, AK::IAkPluginServiceMarkers::CreateMarkerNotificationService() can be used to create an instance of AK::IAkPluginServiceMarkers::IAkMarkerNotificationService. AK::IAkPluginServiceMarkers::IAkMarkerNotificationService::SubmitMarkerNotifications() can then be used to send marker notifications to the game using the AkAudioMarker structure.

Plug-ins

AK Channel Router Transformation

AK Channel Router has been transformed from a mixer plug-in into an Object Processor Effect plug-in. You can use the AK Channel Router plug-in to route and mix multiple busses with different channel configurations into a single bus, which is especially useful if you have an output device with many channels that require non-standard routing. After you insert this Effect on a bus, you can add a Channel Router Setting Metadata to any of its child busses to configure the channel to which the child bus outputs. This plug-in can only be inserted on busses that have their Bus Configuration set to Audio Objects. You cannot add the plug-in to a Master Audio Bus.


More Platforms Supported by the Motion Plug-in

You can now use the Motion plug-in to provide haptics and vibration effects on iOS, tvOS, and macOS.

Experimental

Live Media Transfer

This in-development feature aims to decrease iteration time when connected to a game by allowing you to add or modify sound media and hear the results without regenerating SoundBanks. This can be done for audio sources, impulse response files used by the AK Convolution plug-in, and MIDI. You can create new Sound SFX and Music Segments using the usual workflows, such as the Audio File Importer dialog or drag and drop of WAV files into the Project Explorer.

Live Media Transfer is experimental and not enabled by default. You can enable it in the User Preferences and also limit the amount of memory used by the feature with the Max memory for Media updates setting.


New and modified media does not replace files on disk or in the game's packages; it resides in memory until the game is terminated. Only the media that has been added or modified in the Wwise project and is currently in use by the game is transferred.

You can track the amount of memory used by Live Media Transfer in the Loaded Media tab of the Advanced Profiler. In the SoundBank Name column, media transferred in relation to this feature is labeled “From Authoring”.

The amount of memory used by Live Media Transfer is also included in the Used value in the Memory tab of the Advanced Profiler and in the Total Used Memory in the Performance Monitor. Keep in mind that this contribution is temporary and the amount of memory used by the packaged game will be lower.

Known issues:

  • When you add or modify media, you might notice a slight delay the first time the media is played.
  • New media is played on the next playback of the sound; it does not immediately replace currently playing sounds.
  • Looping sounds don't update until they are stopped and restarted.

Please report any other issues encountered while using this experimental feature to help us stabilize it for a future release.

