Wwise SDK 2023.1.8
Sounds are played in Wwise by posting Events to objects that the game registers and optionally assigns a position. These game objects thus become emitters. The relative position of the simulated 3D environment's emitters and listeners may be used to drive panning and other properties of sounds, which helps the auditory and visual experiences immerse the player in a credible 3D environment.
In Wwise, the designer may choose to pan a sound on the player's speaker setup manually (Speaker Panning), or based on game object position (3D Spatialization). Additionally, they may assign a set of attenuation curves and settings which modify the sound properties (volume, filtering, and so on) based on distance or angle.
Call the AK::SoundEngine::SetPosition() function for every game object whose sounds require this information. Also, make sure to call it for the listener game object. You need to set the position every time the position of a game object changes.
The first parameter is the ID of the game object. Refer to Integration Details - Game Objects for more information.
The second parameter is an AkTransform structure that contains vectors representing the position and orientation of the game object. Refer to X-Y-Z Coordinate System for information regarding how the X, Y, and Z axes are defined in the Wwise sound engine.
Note: If you call the AK::SoundEngine::SetPosition() function multiple times and then call the AK::SoundEngine::RenderAudio() function, only the last value will be considered.
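A minimal sketch of a per-frame position update follows. Because the Wwise SDK headers are not reproduced here, the snippet uses hypothetical stand-ins for AkVector, AkTransform, and the engine calls so it is self-contained; in a real integration these come from the SDK's AkSoundEngine.h, and the real SetPosition()/RenderAudio() return AKRESULT. The toy engine also models the note above: values set between RenderAudio() calls are coalesced, and only the last one is used.

```cpp
#include <cstdint>
#include <map>

// Stand-ins for SDK types; the real ones live in the Wwise SDK headers.
using AkGameObjectID = std::uint64_t;
struct AkVector { float X, Y, Z; };
struct AkTransform {
    AkVector position{0, 0, 0}, front{0, 0, 1}, top{0, 1, 0};
    void SetPosition(float x, float y, float z) { position = {x, y, z}; }
    void SetOrientation(float fx, float fy, float fz, float tx, float ty, float tz) {
        front = {fx, fy, fz}; top = {tx, ty, tz};
    }
};

// Toy model of the engine: SetPosition() stores a pending value and
// RenderAudio() commits it. Calling SetPosition() several times per frame
// is harmless -- only the last value is actually rendered.
struct ToySoundEngine {
    std::map<AkGameObjectID, AkTransform> pending, rendered;
    void SetPosition(AkGameObjectID obj, const AkTransform& t) { pending[obj] = t; }
    void RenderAudio() { for (auto& kv : pending) rendered[kv.first] = kv.second; }
};

void UpdateEmitter(ToySoundEngine& engine, AkGameObjectID obj, float x, float y, float z) {
    AkTransform t;
    t.SetPosition(x, y, z);
    t.SetOrientation(0.f, 0.f, 1.f, 0.f, 1.f, 0.f); // facing +Z, up +Y
    engine.SetPosition(obj, t);
}
```

Call UpdateEmitter() for each moving game object (and the listener) every frame, before the frame's RenderAudio().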
To make an emitter's position automatically follow the listener's position, you need to explicitly set the game object's position to the same value as the listener position every frame. Alternatively, you can post Events directly on the listener game object.
Instead of using the AK::SoundEngine::SetPosition() function, which only allows a single 3D position for each game object, you can use the AK::SoundEngine::SetMultiplePositions() function, which allows you to set multiple 3D positions for a single game object.
Select the MultiPositionType carefully. The option you choose depends on the situation and the type of effect you are trying to create.
- With MultiPositionType_MultiSources, the volumes of the different sound positions are added together, which simulates multiple objects emitting the same sound at the same time. This option is useful in many situations and works well when you have many objects that emit the exact same sound. It might, however, cause issues when the positions are too close to each other: because the volumes are added, multiple copies of the sound play exactly in phase, which increases the chance of clipping.
- With MultiPositionType_MultiDirections, the volume played in each speaker is the maximum contribution in each direction. This option can be used to simulate a variety of situations in game, including wall openings, area sounds, and multiple objects emitting the same sound at the same time. If, however, the player expects the sound to grow louder when two or more emitting objects are in close proximity, use MultiPositionType_MultiSources instead.
- Do not use MultiPositionType_SingleSource with SetMultiplePositions(), because it only takes the first listed position into account.
When in doubt, use MultiPositionType_MultiDirections.
The AK::SoundEngine::SetObjectObstructionAndOcclusion API applies a single obstruction and occlusion value to all sound positions. If instead you use AK::SoundEngine::SetMultipleObstructionAndOcclusion, a unique obstruction and occlusion value is assigned to each sound position. For more information, refer to Obstruction and Occlusion with Game-defined Auxiliary Sends.
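The difference between the two main modes can be sketched numerically: MultiSources sums the linear gains contributed by each position, while MultiDirections keeps only the loudest contribution per output channel. This is illustrative math only, not the SDK's actual mixing code:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Per-channel linear gains contributed by each sound position (1.0 = 0 dB).

// MultiSources: gains add, so several nearby in-phase copies can push the
// result above 1.0 (over 0 dB), which risks clipping.
float CombineMultiSources(const std::vector<float>& gains) {
    float sum = 0.f;
    for (float g : gains) sum += g;
    return sum;
}

// MultiDirections: only the maximum contribution per channel is kept, so the
// result can never exceed the loudest single position -- no clipping risk.
float CombineMultiDirections(const std::vector<float>& gains) {
    float best = 0.f;
    for (float g : gains) best = std::max(best, g);
    return best;
}

// Convert a linear gain to decibels for comparison against 0 dB.
float LinearToDb(float gain) { return 20.f * std::log10(gain); }
```

Two coincident positions at gain 0.7 sum to 1.4 (about +2.9 dB, clipping territory) under MultiSources, but stay at 0.7 under MultiDirections.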
By creating multiple positions for a single game object, you can simulate a number of different sound effects, including:
You can simulate area sounds by using the MultiPositionType_MultiDirections
option combined with appropriate values for Spread and distance-based Volume Attenuation. When using this method, you need to make sure that the number of positions does not increase the volume additively. By recreating area sounds using this method, you can have a single sound coming from multiple directions with realistic attenuation.
In the following example, the black dots represent the initial position of sounds, the black circles represent the minimum distance for attenuation and spreading, and the pink circles represent the maximum radius:
The blue region represents a lake in your game. The ambience sounds emanating from the lake are simulated using a sound with four positions.
When the listener is at position A, the sound of the lake should come from all directions. This can be achieved by setting an appropriately high spread value, which diffuses the sounds so that they play back in all speakers.
When the listener is at position B, they are beyond the maximum attenuation distance. This means that they will either hear no lake sounds or they will hear only a faint sound at its maximum attenuation. Because you are using multiple positions to simulate a large lake, the sound will always come from the appropriate direction.
When the listener is at position C, they will hear the lake from a wide angle of speakers (~180 degrees), but the sounds will be attenuated because the listener is a certain distance away from the lake. In this situation, the listener is still within the attenuation radius, so the lake sounds will not be attenuated to their maximum levels.
By using this technique, you can recreate any kind of object shape by overlapping multiple sound positions. Keep in mind, however, that with each new position added, more CPU is required to calculate the resulting volumes.
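The behavior at listener positions A, B, and C can be reasoned about with a simple clamped interpolation between the minimum distance (full volume) and the maximum distance (fully attenuated). This is only a sketch: in Wwise the actual falloff follows the Attenuation curves authored for the sound.

```cpp
#include <algorithm>

// Linear volume (1.0 = full, 0.0 = silent) as a function of listener distance.
// Inside minDist: full volume (like listener A inside the spread radius).
// Beyond maxDist: fully attenuated (listener B).
// In between: interpolated (listener C, audible but attenuated).
float AttenuatedVolume(float distance, float minDist, float maxDist) {
    if (distance <= minDist) return 1.f;
    if (distance >= maxDist) return 0.f;
    return 1.f - (distance - minDist) / (maxDist - minDist);
}
```

With multiple positions, Wwise evaluates the attenuation against the closest relevant position, so a large lake stays audible from any nearby shore.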
Let's say, for example, that in one of your levels, the corridors are all lit by a series of torches on the walls. These torches are all the same and play the exact same sound. Posting a separate Play Event for each of them (supposing there are 20 of them) would consume significantly more CPU and memory. Also, if the sound is being streamed, you could end up with multiple duplicated streams, which increases I/O access.
In this situation, using SetMultiplePositions() would greatly improve performance. It also allows you to easily control all sounds in one operation, and reduces the number of game objects that need to be registered.
When creating this scenario in game, you can use either MultiPositionType_MultiSources or MultiPositionType_MultiDirections.
MultiPositionType_MultiSources will usually be more accurate, but may be problematic if positions are too close to one another. Remember that when only one voice is played to simulate multiple sounds, Wwise adds the volumes of the different positions, which may result in the volume going over 0 dB. MultiPositionType_MultiDirections uses a bit less CPU, and simply takes the maximum volume and plays it through each speaker. Since MultiPositionType_MultiDirections does not add the volumes, it guarantees that there will be no clipping and, therefore, may be more suitable.
Despite there being multiple positions for a sound, there is still only one voice played by the sound engine. This means that when all sounds are out of range, only one virtual voice will be processed.
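The torch scenario can be sketched as follows. The SDK types (AkSoundPosition, the MultiPositionType enum) are again replaced by hypothetical stand-ins so the snippet compiles on its own; the real AK::SoundEngine::SetMultiplePositions() takes the game object ID, the position array, the position count, and the MultiPositionType. One registered game object and one Play Event cover all 20 torches:

```cpp
#include <cstdint>
#include <vector>

// Stand-ins for SDK types; the real ones come from the Wwise SDK headers.
using AkGameObjectID = std::uint64_t;
struct AkVector { float X, Y, Z; };
struct AkSoundPosition {
    AkVector position{0, 0, 0}, front{0, 0, 1}, top{0, 1, 0};
};
enum MultiPositionType {
    MultiPositionType_SingleSource,
    MultiPositionType_MultiSources,
    MultiPositionType_MultiDirections
};

// Everything needed for one SetMultiplePositions() call on one game object.
struct TorchEmitter {
    AkGameObjectID id;
    std::vector<AkSoundPosition> positions;
    MultiPositionType type;
};

TorchEmitter MakeTorchEmitter(AkGameObjectID id, int torchCount, float spacing) {
    // MultiDirections: take the max per speaker instead of summing,
    // so closely spaced torches cannot push the mix over 0 dB.
    TorchEmitter e{id, {}, MultiPositionType_MultiDirections};
    for (int i = 0; i < torchCount; ++i) {
        AkSoundPosition p;
        p.position = {i * spacing, 2.f, 0.f}; // torches spaced along the corridor wall
        e.positions.push_back(p);
    }
    return e;
}
```

After registering the single game object and passing these positions to SetMultiplePositions(), a single Play Event drives all torches with one voice.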
Let's say a car is blasting stereo-recorded music from its speakers—the emitters. As seen in the image below, the left and right emitters could output respectively the left and right input channels of the music's stereo source file.
In this situation, using the SetMultiplePositions() overload with the AkChannelEmitter parameter provides flexible assignment of input channels. The AkChannelMask specifies which input channel, left or right, each emitter position uses. Therefore, if the audio content differs between the left and right channels of the source, the listener will hear these differences relative to their current position.
Note: The listeners to the left and the right of the car in the above image will hear a combination of the left (red) and right (blue) emitters in a single output channel, which is consequently highlighted in a shade of purple. The exact combination of the emitters depends on the attenuation distances and curves set for them. For example, a listener at a certain distance to the right of the car is likely to hear the right speaker more than the left, which would have greater attenuation. At a certain distance to the right, the listener may not even hear the left speaker at all.
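The car-stereo setup can be sketched like this, using hypothetical stand-ins for AkChannelEmitter and the AK_SPEAKER_* channel-mask bits (the real definitions live in the Wwise SDK headers, and the bit values shown are placeholders):

```cpp
#include <cstdint>
#include <vector>

using AkChannelMask = std::uint32_t;
// Placeholder bits; the SDK defines the real AK_SPEAKER_* constants.
constexpr AkChannelMask AK_SPEAKER_FRONT_LEFT  = 0x1;
constexpr AkChannelMask AK_SPEAKER_FRONT_RIGHT = 0x2;

struct AkVector { float X, Y, Z; };
struct AkTransform { AkVector position{0, 0, 0}, front{0, 0, 1}, top{0, 1, 0}; };

// Stand-in mirroring AkChannelEmitter: an emitter position plus the mask of
// input channels of the source that this position emits.
struct AkChannelEmitter {
    AkTransform position;
    AkChannelMask uInputChannels;
};

// Two emitters on one game object: each car speaker plays one channel of the
// stereo source, so a listener beside the car hears a position-dependent mix.
std::vector<AkChannelEmitter> MakeCarSpeakers() {
    AkChannelEmitter left{}, right{};
    left.position.position  = {-0.8f, 0.5f, 1.5f}; // left-side speaker
    left.uInputChannels     = AK_SPEAKER_FRONT_LEFT;  // source's L channel
    right.position.position = { 0.8f, 0.5f, 1.5f}; // right-side speaker
    right.uInputChannels    = AK_SPEAKER_FRONT_RIGHT; // source's R channel
    return {left, right};
}
```

The resulting array would be passed to the AkChannelEmitter overload of SetMultiplePositions() for the car's game object.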
When sounds are partially obstructed and/or occluded, they may appear to come from different directions than where they are actually located. In these cases, you can position the sound in multiple locations to create the effect that you want.
For example, let's say that the direct path of a sound is blocked by a building (see the following illustration). Instead of using the real position of object A, it would sound more realistic to use the two alternate positions, A' and A''. When doing so, either use SetObjectObstructionAndOcclusion(), which applies the same obstruction and occlusion parameters to all positions, or use SetMultipleObstructionAndOcclusion(), which lets you specify a unique obstruction and occlusion value for each sound position.
This approach also works well when simulating environments that have been destroyed, and sounds are coming from a number of arbitrary positions. When re-localizing sounds, it is better to use MultiPositionType_MultiDirections.
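The A'/A'' scenario with per-position values can be sketched as follows, using a stand-in that mirrors the SDK's AkObstructionOcclusionValues pair (the real struct and the SetMultipleObstructionAndOcclusion() signature are in the Wwise SDK headers; the specific values here are illustrative):

```cpp
#include <vector>

// Stand-in mirroring the SDK's per-position obstruction/occlusion pair.
// Both values range from 0.0 (none) to 1.0 (full).
struct AkObstructionOcclusionValues {
    float occlusion;   // attenuation through a medium (e.g. the building itself)
    float obstruction; // direct path blocked, indirect paths still open
};

// One entry per sound position, in the same order as the positions passed to
// SetMultiplePositions(). A' is the near corner, A'' the farther detour.
std::vector<AkObstructionOcclusionValues> MakeValuesForAlternatePositions() {
    return {
        {0.0f, 0.3f}, // A': lightly obstructed around the near corner
        {0.0f, 0.6f}, // A'': longer detour, more obstruction
    };
}
```

Passing this array to SetMultipleObstructionAndOcclusion() gives each alternate position its own filtering and attenuation, instead of one value shared by both.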
The Wwise sound engine expects a left-handed coordinate system: by default, positive X points to the right, positive Y points up, and positive Z points forward. Ultimately, the only thing that matters is the position and orientation of emitters relative to the listener's orientation frame.
For an example of integrating 3D positions, refer to 3D Position Example.