Wwise SDK 2024.1.1
The Integration Demo application contains a series of demonstrations that show how to integrate various features of the sound engine in your game.
Note: All code presented in this section is available in a sample project in the samples\IntegrationDemo\ directory.
The Wwise project for this program is also available in samples\IntegrationDemo\WwiseProject.
Note: The Wwise project for this program uses various audio file conversion formats, some of which may not be available depending on which platforms are supported by your Wwise installation. After opening the project in Wwise, you may see warnings about unavailable conversion formats. You can remove these messages by changing the conversion format for all unavailable platforms to PCM. Refer to the following topic in the Wwise User Guide for more information: Converting Audio Files.
SoundBanks for this project are also installed with the SDK in the samples\IntegrationDemo\WwiseProject\GeneratedSoundBanks folder.
To regenerate the SoundBanks, make sure the appropriate SoundBanks, platforms, and languages are selected in the SoundBank Manager. Once these settings are correct, you can click Generate in the SoundBank Manager to generate the banks.
The Integration Demo binaries are available in the \[Debug|Profile|Release]\bin directory. If you would like to rebuild the application yourself, follow the steps below for your platform.

Windows: Open the project found in samples\IntegrationDemo\Windows and build using the desired configuration. To run the Integration Demo, simply launch the executable found in the directory mentioned above.

Mac: Open the project found in samples/IntegrationDemo/Mac and build using the desired configuration. The resulting binaries are placed in Mac/[Debug|Profile|Release]/bin.

iOS, tvOS, and visionOS: Open the project found in samples/IntegrationDemo/iOS, samples/IntegrationDemo/tvOS, or samples/IntegrationDemo/visionOS and build using the desired configuration.

Android: Open the Wwise project in samples\IntegrationDemo\WwiseProject and generate the SoundBanks for Android in their default paths. Then open the project found in samples\IntegrationDemo\Android and build using the desired configuration. The resulting binaries are placed in samples\IntegrationDemo\Android\Android_[armeabi-v7a|arm64-v8a|x86|x86_64]\[Debug|Profile|Release]\bin. For example: samples\IntegrationDemo\Android\Android_armeabi-v7a\Debug\bin. Install the application on your device with adb install IntegrationDemo.apk.

Alternative build method with Android Studio: If you have Android Studio installed, we provide a Gradle project that will compile and deploy the Integration Demo. Simply open the samples\IntegrationDemo\Android directory in Android Studio and you should be able to build and launch the app.
Note: You will need to use the software keyboard or hardware keyboard to interact with the Integration Demo. Navigation is achieved using the W, A, S, and D keys. Use Enter to select and Space to return.
OpenHarmony: Open the Wwise project in samples\IntegrationDemo\WwiseProject and generate the SoundBanks for OpenHarmony in their default paths. Open the project found in SDK\samples\IntegrationDemo\OpenHarmony in the DevEco Studio IDE. In SDK\samples\IntegrationDemo\OpenHarmony\build-profile.json5, select a specific build target such as arm64-v8a_Debug. Choosing "default" here will not work.

Linux: In a terminal, cd SDK/samples/IntegrationDemo/Linux and build using IntegrationDemo_Linux.make and the variables documented in Linux-Specific Information, for example: make -f ./IntegrationDemo_Linux.make AK_LINUX_ARCH=x64 config=debug. The resulting binaries are placed in SDK/Linux_x[32|64]/[Debug|Profile|Release]/bin.

You can navigate through the Integration Demo on Windows using either the keyboard, a connected controller, or any DirectInput compatible device.
Certain controls (such as Toggle Controls and Numeric Sliders) allow you to change values. To change their values, hit the left and right arrow keys or the left and right buttons on a gamepad's directional pad.
Tip: The application has an online help feature! To access the Help page, press F1 on the keyboard or the START button on a gamepad.
The code behind each demonstration can be found in the samples\IntegrationDemo\DemoPages directory. For example, the code for the Localization demo is in the DemoLocalization.h and DemoLocalization.cpp files in that directory.
Tip: Pertinent information about each demo can also be found in the Integration Demo application's online help.
This demo shows how to implement localized audio. Localized sound objects are found in language-specific SoundBanks in subdirectories in the SoundBank generation directory. We achieve the localization effect by unloading the current SoundBank and reloading the desired language-specific SoundBank.
Use the Language toggle control to switch the current language. Then press the Say Hello button to hear a greeting in the selected language.
For more information about languages and localization, see Integration Details - Languages and Voices.
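The gist of the approach can be sketched as follows, assuming a localized bank named Human.bnk (the demo's actual bank and language names may differ): set the current language on the Stream Manager, then unload and reload the localized bank so its media is resolved from the new language subfolder.

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>
#include <AK/SoundEngine/Common/AkStreamMgrModule.h>

static AkBankID g_localizedBankID = AK_INVALID_UNIQUE_ID;

// Switch the current language, then reload the language-specific SoundBank so
// that localized media is fetched from the matching language subfolder.
bool SwitchLanguage(const AkOSChar* in_szLanguage)
{
    // Tell the Stream Manager which language subfolder to resolve localized files from.
    if (AK::StreamMgr::SetCurrentLanguage(in_szLanguage) != AK_Success)
        return false;

    // Unload the previously loaded localized bank (if any) and load it again.
    AK::SoundEngine::UnloadBank("Human.bnk", nullptr);
    return AK::SoundEngine::LoadBank("Human.bnk", g_localizedBankID) == AK_Success;
}
```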
The Dynamic Dialogue demo runs through a series of tests that use Wwise's Dynamic Dialogue features. Each test demonstrates a different control flow so that you can hear the effect it produces.
For more information about Dynamic Dialogue, see Integration Details - Dynamic Dialogue.
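As a rough sketch of the underlying calls (the Event, argument, and game object names here are illustrative, not the demo's actual ones), a Dialogue Event is resolved to an audio node and then queued in a dynamic sequence:

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>
#include <AK/SoundEngine/Common/AkDynamicDialogue.h>
#include <AK/SoundEngine/Common/AkDynamicSequence.h>

void PlayDialogueLine(AkGameObjectID in_gameObj)
{
    // Resolve a Dialogue Event to an audio node ID using the current argument path.
    const char* aArgs[] = { "Hostage", "Rescued" };
    AkUniqueID nodeID = AK::SoundEngine::DynamicDialogue::ResolveDialogueEvent(
        "WalkieTalkie", aArgs, 2);

    // Queue the resolved node in a dynamic sequence and play it.
    AkPlayingID seq = AK::SoundEngine::DynamicSequence::Open(in_gameObj);
    AK::SoundEngine::DynamicSequence::Playlist* pPlaylist =
        AK::SoundEngine::DynamicSequence::LockPlaylist(seq);
    if (pPlaylist)
        pPlaylist->Enqueue(nodeID);
    AK::SoundEngine::DynamicSequence::UnlockPlaylist(seq);
    AK::SoundEngine::DynamicSequence::Play(seq);
}
```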
This demo shows how to use RTPCs. The RPM numeric slider is linked with an RTPC value (RPM) associated with the engine. Press the Start Engine button to start/stop car engine audio. Use the RPM slider to change the RTPC value and hear the effect.
For more information about RTPCs, see Integration Details - RTPCs.
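In code, this comes down to driving a Game Parameter on the car's game object; a minimal sketch, with an illustrative game object ID and Event name:

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>

// Hypothetical game object ID for the car; in the demo the value comes from
// the page's registered game object.
static const AkGameObjectID CAR = 100;

// Start the engine loop, then drive the "RPM" Game Parameter as the slider moves.
void StartEngine()
{
    AK::SoundEngine::PostEvent("Play_Engine", CAR);
}

void OnRPMSliderChanged(float in_fRPM)
{
    AK::SoundEngine::SetRTPCValue("RPM", in_fRPM, CAR);
}
```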
This demo shows various ways to implement footsteps in a game. It also shows surface-driven bank management to minimize both media and metadata memory when a surface isn't in use. Finally, this demo also shows a very simple case of environmental effects.
In this example, the footstep sounds are modified by two variables: walking speed, and walker weight.
With each surface, we show a different way of dealing with the sound samples and variables. These are only suggestions and ideas that you can use in your own structure.
Bank management: In the Footsteps demo, the banks are divided into four media banks (one per surface). We divided the screen into four areas, with a buffer zone between surfaces where both banks are loaded. This avoids a gap in the footsteps due to bank loading.

In the SoundBank Manager, look at the GameSync tab. Note that each surface bank includes only the corresponding surface Switch, which includes only the hierarchy related to that Switch in the bank and nothing else. In a large game, this setup limits the amount of unused samples loaded in a particular scenario, thus limiting the memory used. For level- or section-based games, it is easy to identify the surfaces used because they are known at the design stage. For open games, this is trickier and depends a lot on the organization of your game, but it can still be achieved. For example, it is useless to keep the "snow and ice" surface sounds in memory if your player is currently in a warm city and won't be moving toward colder settings for a long time.
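In code, each footstep typically sets the surface Switch and the speed/weight Game Parameters on the walker's game object before posting the footstep Event, and a surface bank is loaded as the player nears that surface. A minimal sketch; the Switch, RTPC, Event, and bank names are illustrative, not the demo's exact identifiers:

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>

// Called each time the walker takes a step.
void OnFootstep(AkGameObjectID in_player, const char* in_szSurface, float in_fSpeed, float in_fWeight)
{
    AK::SoundEngine::SetSwitch("Surface", in_szSurface, in_player);
    AK::SoundEngine::SetRTPCValue("Footstep_Speed", in_fSpeed, in_player);
    AK::SoundEngine::SetRTPCValue("Footstep_Weight", in_fWeight, in_player);
    AK::SoundEngine::PostEvent("Play_Footstep", in_player);
}

// When the player enters a buffer zone near a new surface, load that surface's
// media bank ahead of time so the first footstep on it does not wait on I/O.
void OnEnterSurfaceBufferZone(const char* in_szSurfaceBank)
{
    AkBankID bankID;
    AK::SoundEngine::LoadBank(in_szSurfaceBank, bankID);
}
```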
This demo shows how you can set up a callback function to receive notification when markers inside a sound file are hit. For this demonstration, we are using the markers to synchronize subtitles with the audio track.
For more information on markers, see Integrating Markers.
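In code, marker notifications are requested with the AK_Marker flag when posting the Event, and the registered callback receives an AkMarkerCallbackInfo for every marker reached. A minimal sketch; the Event name, game object, and subtitle helper are illustrative:

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>
#include <AK/SoundEngine/Common/AkCallback.h>

static const AkGameObjectID NARRATOR = 400;  // illustrative game object

// Hypothetical game-side helper that displays or clears the current subtitle.
void ShowSubtitle(const char* in_szText);

// Invoked by the sound engine each time a marker embedded in the source file is reached.
static void MarkerCallback(AkCallbackType in_eType, AkCallbackInfo* in_pInfo)
{
    if (in_eType == AK_Marker)
    {
        AkMarkerCallbackInfo* pMarker = static_cast<AkMarkerCallbackInfo*>(in_pInfo);
        ShowSubtitle(pMarker->strLabel);   // the marker label carries the subtitle text
    }
    else if (in_eType == AK_EndOfEvent)
    {
        ShowSubtitle("");                  // clear the subtitle when playback ends
    }
}

// Request marker (and end-of-event) notifications when posting the Event.
void PlayNarration()
{
    AK::SoundEngine::PostEvent("Play_Markers_Test", NARRATOR,
        AK_Marker | AK_EndOfEvent, MarkerCallback, nullptr);
}
```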
This demo shows how to use music callbacks in general. Beat and bar notifications are generated from music tempo and time signature information.
This example shows how to force a random playlist to select its next item sequentially. The playlist item may be stopped via the callback as well.
This example shows MIDI messages that the game can receive using callbacks. MIDI messages include MIDI notes, CC values, Pitch Bend, After Touch, and Program Changes.
For more information on music callbacks, refer to Integration Details - Music Callbacks.
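Beat and bar notifications are requested the same way as other callbacks, by passing the corresponding flags to PostEvent. A minimal sketch for the beat/bar case; the Event name and game object ID are illustrative:

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>
#include <AK/SoundEngine/Common/AkCallback.h>

static const AkGameObjectID MUSIC_GAME_OBJECT = 300;  // illustrative

// Receives beat and bar notifications derived from the music's tempo and time signature.
static void MusicCallback(AkCallbackType in_eType, AkCallbackInfo* in_pInfo)
{
    AkMusicSyncCallbackInfo* pMusic = static_cast<AkMusicSyncCallbackInfo*>(in_pInfo);
    if (in_eType == AK_MusicSyncBeat)
    {
        // For example, pulse the UI; fBeatDuration is the current beat length in seconds.
        float beatLength = pMusic->segmentInfo.fBeatDuration;
        (void)beatLength;
    }
    else if (in_eType == AK_MusicSyncBar)
    {
        // For example, advance a bar counter.
    }
}

// Request beat and bar notifications when posting the interactive music Event.
void StartMusic()
{
    AK::SoundEngine::PostEvent("Play_Music", MUSIC_GAME_OBJECT,
        AK_MusicSyncBeat | AK_MusicSyncBar, MusicCallback, nullptr);
}
```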
This example uses a Music Switch Container. Try switching the States by triggering the Event listed in the demo page. Switching States might produce a result that is immediate or occurs at the time specified in the rules of the Music Container.
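Although the demo page changes States by posting Events, the same State changes can be driven directly from code; the State Group and State names below are illustrative:

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>

// Set a global State; the Music Switch Container's transition rules decide
// whether the music changes immediately or at the next sync point.
void EnterCombat()
{
    AK::SoundEngine::SetState("PlayerStatus", "Combat");
}
```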
This example demonstrates the use of the MIDI API. Press the Start Metronome button to simulate an active metronome. Then select the BPM slider and press LEFT or RIGHT to change its value. The demo uses a registered callback function to post MIDI Events to the sound engine via the PostMIDIOnEvent function.
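A single metronome tick boils down to filling an AkMIDIPost structure and posting it on a target Event. A minimal sketch, assuming an illustrative Event name and note values:

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>
#include <AK/SoundEngine/Common/AkMidiTypes.h>

// Post a single MIDI note-on to a target Event; the metronome demo posts
// messages like this from a timed callback.
void PostMetronomeTick(AkGameObjectID in_gameObj)
{
    AkMIDIPost tick;
    tick.byType = AK_MIDI_EVENT_TYPE_NOTE_ON;
    tick.byChan = 0;
    tick.NoteOnOff.byNote = 60;        // middle C
    tick.NoteOnOff.byVelocity = 100;
    tick.uOffset = 0;                  // play at the start of the next audio frame

    AkUniqueID eventID = AK::SoundEngine::GetIDFromString("Play_Metronome");
    AK::SoundEngine::PostMIDIOnEvent(eventID, in_gameObj, &tick, 1);
}
```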
This example shows how to handle the DVR legal requirements for Xbox One and PS4. Since many games include copyrighted music, it is often not permitted to record it with the built-in DVR. This demo shows the differences between a sound that is recorded by the DVR and one that isn't. Please refer to the Wwise Project and check the setup of the sounds in the BGMDemo folder, paying attention to their routing and which Audio Device they use. The Non-Recordable sound will be routed to a bus that outputs to the DVR_Bypass output.
These demos show various ways to do 3D positioning in Wwise.
A helicopter sound starts playing as soon as you enter the page. Move the 'o' around in X and Z, the plane of the screen, using the directional keys.
This demo sets only a single position.
This demo sets two positions.
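On the code side, the single-position case updates one AkSoundPosition per frame, and the multi-position case feeds several positions to the same game object. A minimal sketch with an illustrative game object ID:

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>

// Update the helicopter emitter's single position each frame.
void UpdateHelicopterPosition(AkGameObjectID in_heli, float in_x, float in_z)
{
    AkSoundPosition pos;
    pos.SetPosition(in_x, 0.0f, in_z);
    pos.SetOrientation(0.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f); // facing +Z, up +Y
    AK::SoundEngine::SetPosition(in_heli, pos);
}

// The two-position demo instead assigns several positions to one game object,
// so a single voice appears to emit from all of them at once.
void SetTwoPositions(AkGameObjectID in_heli, const AkSoundPosition& in_a, const AkSoundPosition& in_b)
{
    AkSoundPosition aPos[2] = { in_a, in_b };
    AK::SoundEngine::SetMultiplePositions(in_heli, aPos, 2,
        AK::SoundEngine::MultiPositionType_MultiSources);
}
```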
This demo shows positioning applied only in the bus hierarchy. With Position + Orientation 3D Spatialization and Attenuation applied to the bus alone, the sound engine only applies the spatialization after the three child sounds are mixed together.
This demo illustrates how a movable emitter and listener can interact with each other, notably through a Room connected with a Portal.
The 3D bus applies a reverb Effect, after which the output is positioned and spatialized before being mixed in the Master Audio Bus.
This demo lets you experience a movable emitter and listener interacting with each other in combination with two different "Portaled" Rooms. Depending on the positions of the game objects, the Rooms may be close enough to be excited by the emitter output.
The 3D bus applies a reverb Effect, after which the output is positioned and spatialized before being mixed in the Master Audio Bus.
These demos show various ways to use Spatial Audio to model sound propagation across Rooms, Portals, and Geometry.
Each demo page includes a movable emitter and a movable listener. You can offset the listener from a Distance Probe to simulate a third-person listening experience.
This demo shows the effect of Portals in Spatial Audio positioning. There are two Rooms with Portals and visible sound propagation paths. The resulting diffraction and transmission amounts are displayed in the lower-left corner. Same-room obstruction (emitter-listener and portal-listener) is calculated through a combination of Portal-driven propagation and a native game-side obstruction algorithm. Game object spread is calculated for the emitter with its set radial value. Finally, this demo shows how to use a Room to play multi-channel ambient sounds / room tones that contract and become point sources at portals.
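A minimal Rooms and Portals setup looks roughly like the following; the Room and Portal IDs, the reverb Auxiliary Bus name, and the portal geometry are illustrative, and the demo itself configures considerably more (obstruction, spread, room tones):

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>
#include <AK/SpatialAudio/Common/AkSpatialAudio.h>

// Create one Room, one Portal between it and a second Room, and tell Spatial
// Audio which Room the listener is currently in.
void SetupRoomsAndPortal(AkRoomID in_roomA, AkRoomID in_roomB, AkPortalID in_portal,
                         AkGameObjectID in_listener)
{
    AkRoomParams room;
    room.Front.X = 0.f; room.Front.Y = 0.f; room.Front.Z = 1.f;
    room.Up.X = 0.f;    room.Up.Y = 1.f;    room.Up.Z = 0.f;
    room.TransmissionLoss = 0.9f;                                     // loss through the walls
    room.ReverbAuxBus = AK::SoundEngine::GetIDFromString("RoomVerb"); // Aux Bus in the project
    AK::SpatialAudio::SetRoom(in_roomA, room);

    AkPortalParams portal;
    portal.Transform.SetPosition(0.f, 0.f, 5.f);
    portal.Transform.SetOrientation(0.f, 0.f, 1.f, 0.f, 1.f, 0.f);   // opening faces +Z
    portal.Extent.halfWidth = 1.f;
    portal.Extent.halfHeight = 2.f;
    portal.Extent.halfDepth = 0.5f;
    portal.FrontRoom = in_roomA;
    portal.BackRoom = in_roomB;
    portal.bEnabled = true;
    AK::SpatialAudio::SetPortal(in_portal, portal);

    // Rooms and Portals need to know which Room each emitter and listener is in.
    AK::SpatialAudio::SetGameObjectInRoom(in_listener, in_roomB);
}
```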
This demo shows the effect of Portals in conjunction with the Spatial Audio Geometry API. There are two Rooms with Portals, Geometry for the wall inside and outside of the Room, an obstacle and visible sound propagation paths. The resulting diffraction and transmission amounts are displayed in the lower-left corner. Spatial Audio is set up so that diffraction/transmission controls both the project-wide curves and the built-in parameters, although only the former is used in the Wwise project. Spatial Audio handles diffraction and transmission using Portals and Geometry respectively. The demo does not compute additional obstruction and occlusion.
This demo showcases the Wwise Spatial Audio Geometry API, usable for direct (dry) path diffraction and transmission. There are two walls and visible diffraction paths. The resulting diffraction and transmission amounts are displayed in the lower-left corner. Spatial Audio is set up so that diffraction/transmission controls both the project-wide curves and the built-in parameters, although only the former is used in the project.
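Registering geometry amounts to describing triangles, vertices, and acoustic surfaces, then instantiating the set in the world. A minimal sketch for one rectangular wall; the IDs, sizes, and the "Brick" Acoustic Texture name are illustrative:

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>
#include <AK/SpatialAudio/Common/AkSpatialAudio.h>

// Register a 4 m x 2 m wall (two triangles) with the Geometry API and instantiate it.
void RegisterWallGeometry(AkGeometrySetID in_geomSet, AkGeometryInstanceID in_instance)
{
    static AkVertex verts[4] = {
        { -2.f, 0.f, 0.f }, { 2.f, 0.f, 0.f }, { 2.f, 2.f, 0.f }, { -2.f, 2.f, 0.f }
    };
    static AkTriangle tris[2];
    tris[0].point0 = 0; tris[0].point1 = 1; tris[0].point2 = 2; tris[0].surface = 0;
    tris[1].point0 = 0; tris[1].point1 = 2; tris[1].point2 = 3; tris[1].surface = 0;

    static AkAcousticSurface surf;
    surf.textureID = AK::SoundEngine::GetIDFromString("Brick"); // Acoustic Texture ShareSet
    surf.strName = "Wall";

    AkGeometryParams geom;
    geom.Vertices = verts;          geom.NumVertices = 4;
    geom.Triangles = tris;          geom.NumTriangles = 2;
    geom.Surfaces = &surf;          geom.NumSurfaces = 1;
    geom.EnableDiffraction = true;  // allow edge diffraction around the wall
    AK::SpatialAudio::SetGeometry(in_geomSet, geom);

    // Geometry only affects sound propagation once it is instantiated in the world.
    AkGeometryInstanceParams inst;
    inst.GeometrySetID = in_geomSet;
    AK::SpatialAudio::SetGeometryInstance(in_instance, inst);
}
```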
This demo showcases the Wwise Reflect plug-in using the Geometry API to simulate early reflections. There is a room and a separate wall, defined by Spatial Audio Geometry, and visible reflection paths. Spatial Audio is set up with a reflection order, displayed in the lower-left corner. Additionally, Spatial Audio allows reflection paths to diffract. The room Geometry in this demo can change texture and be scaled in size, demonstrating the adaptability of the Geometry API.
This demo showcases the Reflect plug-in within the context of Rooms and Portals. There are five Rooms, connected by Portals that can be toggled open or closed, some additional walls defined by Spatial Audio Geometry, and visible reflection paths.
This demo demonstrates the use of a Reverb Zone to create a space that has its own reverb effect and transitions into the outside Room without the use of Portals. There is a Room with a Portal that connects to a Reverb Zone, forming something like a covered balcony. The Reverb Zone's parent is the outdoor Room. There is also Geometry outside to show how paths can diffract around geometry, pass through the transparent surfaces of the Reverb Zone, and then continue through portals.
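At the API level, a Reverb Zone is a Room attached to a parent Room so the two reverbs blend across a transition region instead of through a Portal. A rough sketch, assuming both Rooms were already created with SetRoom and that the transition width value is illustrative:

```cpp
#include <AK/SpatialAudio/Common/AkSpatialAudio.h>

// Attach an existing Room (the covered balcony) to its parent Room (outdoors)
// as a Reverb Zone; sounds crossing the boundary crossfade between the reverbs.
AKRESULT MakeBalconyReverbZone(AkRoomID in_balcony, AkRoomID in_outdoors)
{
    const AkReal32 kTransitionWidth = 1.0f; // meters over which the reverbs crossfade
    return AK::SpatialAudio::SetReverbZone(in_balcony, in_outdoors, kTransitionWidth);
}
```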
This page shows how to use the PrepareBank and PrepareEvent API functions.
When the page is loaded, a PrepareBank operation loads the lightweight structure and event data referenced by this demo, without loading any actual media into memory. When the user moves the cursor to an area button, a PrepareEvent operation loads the corresponding media file (a .wem file on disk) into memory in anticipation of a future PostEvent. When the user finally enters the area, the event is posted and the media is ready to play.
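The sequence of calls can be sketched as follows; the bank name, Event handling, and game object ID are illustrative:

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>

static const AkGameObjectID AREA_EMITTER = 200;  // illustrative

void OnDemoPageLoaded()
{
    // Load only structure and Event data from the bank; no media yet.
    AK::SoundEngine::PrepareBank(AK::SoundEngine::Preparation_Load,
        "PrepareDemo.bnk", AK::SoundEngine::AkBankContent_StructureOnly);
}

void OnCursorOverArea(const char* in_szAreaEvent)
{
    // Load just this Event's media (.wem) into memory ahead of the PostEvent.
    AK::SoundEngine::PrepareEvent(AK::SoundEngine::Preparation_Load, &in_szAreaEvent, 1);
}

void OnEnterArea(const char* in_szAreaEvent)
{
    // The media is already resident, so the Event starts without waiting on I/O.
    AK::SoundEngine::PostEvent(in_szAreaEvent, AREA_EMITTER);
}
```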
This demo shows how to use external sources. Both buttons play the same sound structure, but it is set up at run time with either sources "1", "2", and "3", or sources "4", "5", and "6".
Additionally, the external sources are packaged in the File Packager and loaded when opening the demo page. Refer to the Wwise Help for more information on the File Packager, and to the Streaming / Stream Manager chapter for more details on the run-time aspect of file packages.
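Resolving external sources at run time means passing an array of AkExternalSourceInfo entries to PostEvent. The sketch below uses loose .wem files rather than a file package; the file names, cookie names, codec, and Event name are illustrative (the cookie must match the name of the External Source object in the project):

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>

void PlayExternalSources(AkGameObjectID in_gameObj)
{
    AkExternalSourceInfo sources[3];
    const AkOSChar* files[3] = { AKTEXT("01.wem"), AKTEXT("02.wem"), AKTEXT("03.wem") };
    const char* cookies[3]   = { "ExtSrc_1", "ExtSrc_2", "ExtSrc_3" };
    for (int i = 0; i < 3; ++i)
    {
        sources[i].iExternalSrcCookie = AK::SoundEngine::GetIDFromString(cookies[i]);
        sources[i].szFile = const_cast<AkOSChar*>(files[i]);
        sources[i].idCodec = AKCODECID_VORBIS;  // must match the conversion format used
    }

    AK::SoundEngine::PostEvent("Play_ExternalSources", in_gameObj,
        0, nullptr, nullptr, 3, sources);
}
```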
This demo shows the pros and cons of automatic event bank generation. When this option is selected in the Project Settings, Wwise generates individual banks for any events that are not contained in manually created banks. However, these auto-banks do not contain any media, only structure and event data. To load media associated with these events, the game must call either AK::SoundEngine::PrepareEvent or AK::SoundEngine::SetMedia.
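If the game prefers to manage media buffers itself instead of calling PrepareEvent, it can hand the loaded .wem data to the sound engine directly. A minimal sketch, assuming the AkSourceSettings field names; in practice the media ID comes from the metadata generated alongside the banks:

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>

// Register an in-memory .wem with the sound engine so Events referencing it can play.
// The buffer must remain valid until the media is unset or the sound engine terminates.
AKRESULT HandMediaToSoundEngine(AkUniqueID in_mediaID, AkUInt8* in_pWemData, AkUInt32 in_uWemSize)
{
    AkSourceSettings source;
    source.sourceID = in_mediaID;      // media ID from the generated SoundBank metadata
    source.pMediaMemory = in_pWemData; // buffer the game loaded from disk
    source.uMediaSize = in_uWemSize;
    return AK::SoundEngine::SetMedia(&source, 1);
}
```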
This demo shows how to record audio from a microphone and feed it to the Wwise sound engine. In the Integration Demo, select the Microphone Demo and speak into the microphone to hear your voice played back by the Wwise sound engine. Toggle the Enable Delay option to hear an example of how audio data fed to the Audio Input plug-in can be processed like any other sound created in Wwise.
Each platform has a very different core API for accessing the microphone. Check the SoundInput and SoundInputMgr classes in the Integration Demo code to see how they interact with the Audio Input plug-in.
Note: This demo is available on the following platforms: Windows, macOS, iOS, and tvOS.
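On the Wwise side, the Audio Input source plug-in pulls its samples from game-registered callbacks. A rough sketch of that registration, assuming the AkAudioInputPlugin.h callback interface; the capture function and the mono 48 kHz format are illustrative:

```cpp
#include <AK/SoundEngine/Common/AkCommonDefs.h>
#include <AK/Plugin/AkAudioInputPlugin.h>

// Hypothetical game-side capture function: fills the buffer with mono float
// samples and returns the number of frames written.
AkUInt16 ReadMicrophoneSamples(AkSampleType* out_pSamples, AkUInt16 in_uMaxFrames);

// Describes the format of the data the execute callback will deliver.
static void GetAudioInputFormat(AkPlayingID /*in_playingID*/, AkAudioFormat& io_format)
{
    io_format.SetAll(48000, AkChannelConfig(1, AK_SPEAKER_SETUP_MONO),
                     32, sizeof(AkReal32), AK_FLOAT, AK_NONINTERLEAVED);
}

// Called by the Audio Input source for every audio buffer it renders.
static void ExecuteAudioInput(AkPlayingID /*in_playingID*/, AkAudioBuffer* io_pBuffer)
{
    AkSampleType* pOut = io_pBuffer->GetChannel(0);
    io_pBuffer->uValidFrames = ReadMicrophoneSamples(pOut, io_pBuffer->MaxFrames());
    io_pBuffer->eState = AK_DataReady;
}

// Register the callbacks once, before posting the Event that plays the Audio Input source.
void InitMicrophoneInput()
{
    SetAudioInputCallbacks(ExecuteAudioInput, GetAudioInputFormat);
}
```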
This is a multiplayer demonstration that shows how to integrate Wwise's motion engine into your game.
In this demonstration, each player has the option to either close a door in the environment or shoot a gun that they are holding. Each player has a listener that is active on the door game object as well as on the player's own gun. This way, if any player closes the door in the environment, all players receive force feedback reactions. However, only the players who fired their weapon receive force feedback for that Event. Additionally, on PS4 and PS5, the gun sound only plays on the gamepad speaker of each player.
Note: On Windows, a player using a keyboard should plug in a gamepad to participate in this demo.
This code demonstrates the use of secondary outputs, Wwise Motion, and Listener/Emitter management.
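The secondary-output side of this can be sketched as follows; the "Wwise_Motion" Audio Device ShareSet name, the device ID usage, and the game object IDs are assumptions for illustration:

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>

// Add a motion output associated with one player's listener.
void AddPlayerMotionOutput(AkGameObjectID in_playerListener, AkUInt32 in_uDeviceID)
{
    // "Wwise_Motion" is assumed to be the Audio Device ShareSet used for the
    // Motion device in the Wwise project; in_uDeviceID targets a specific controller.
    AkOutputSettings motionSettings("Wwise_Motion", in_uDeviceID);
    AkOutputDeviceID outputID;
    AK::SoundEngine::AddOutput(motionSettings, &outputID, &in_playerListener, 1);
}

// Emitters (the door, each player's gun) are then routed to the listeners of
// the players that should feel the feedback.
void RouteEmitterToPlayer(AkGameObjectID in_emitter, AkGameObjectID in_playerListener)
{
    AK::SoundEngine::SetListeners(in_emitter, &in_playerListener, 1);
}
```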
This page lets you view several initialization settings for the Sound Engine. You can also choose the audio output for the whole application. The sample code shows how to initialize and terminate the Sound Engine, and how to select different physical audio outputs. Please refer to the specific sections below for more details on Sound Engine initialization.
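A minimal initialization and termination sequence with default settings is sketched below; the demo's own code additionally creates a streaming device and applies platform-specific settings on top of this:

```cpp
#include <AK/SoundEngine/Common/AkMemoryMgr.h>
#include <AK/SoundEngine/Common/AkModule.h>
#include <AK/SoundEngine/Common/IAkStreamMgr.h>
#include <AK/SoundEngine/Common/AkStreamMgrModule.h>
#include <AK/SoundEngine/Common/AkSoundEngine.h>

// Bring up the Memory Manager, Stream Manager, and Sound Engine with defaults.
bool InitSoundEngine()
{
    AkMemSettings memSettings;
    AK::MemoryMgr::GetDefaultSettings(memSettings);
    if (AK::MemoryMgr::Init(&memSettings) != AK_Success)
        return false;

    AkStreamMgrSettings stmSettings;
    AK::StreamMgr::GetDefaultSettings(stmSettings);
    if (!AK::StreamMgr::Create(stmSettings))
        return false;

    AkInitSettings initSettings;
    AkPlatformInitSettings platformSettings;
    AK::SoundEngine::GetDefaultInitSettings(initSettings);
    AK::SoundEngine::GetDefaultPlatformInitSettings(platformSettings);
    return AK::SoundEngine::Init(&initSettings, &platformSettings) == AK_Success;
}

// Tear everything down in the reverse order.
void TermSoundEngine()
{
    AK::SoundEngine::Term();
    if (AK::IAkStreamMgr::Get())
        AK::IAkStreamMgr::Get()->Destroy();
    AK::MemoryMgr::Term();
}
```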
The Integration Demo as well as its Wwise Project are kept very simple in order to demonstrate the basics of sound engine integration. For a more realistic integration project, refer to the AkCube Sound Engine Integration Sample Project.