The Object-based audio pipeline allows Wwise to deliver individual audio buffers, along with their Metadata, to the part of a platform’s operating system in charge of audio, known as an endpoint. Endpoints that support 3D audio can use this Metadata, which includes a 3D position and orientation, to render their own spatialized effects. This results in the best possible spatial precision in the delivery of the sound because the endpoint is aware of the listening configuration. Therefore, the endpoint can use the most appropriate rendering method to deliver the final mix over headphones or speakers.
The combination of an audio buffer and its corresponding Metadata is referred to as an Audio Object. Audio Objects in the Bus Hierarchy differ from System Audio Objects sent to the endpoint. When a bus configuration is set to Audio Objects, the bus can hold different types of Audio Objects, from 3D objects to multi-channel, non-spatialized objects.
Each Audio Object can have:
3D spatialization information or not.
Plug-in metadata or not. (This could be inserted directly on the Audio Object from the Actor-Mixer Hierarchy, or Metadata could be propagated to the Audio Object as it passes through a bus that has a Metadata plug-in.)
One or many channels.
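To make the concept concrete, the following is a minimal illustrative sketch in C++ of what an Audio Object pairs together: one or more channel buffers, optional 3D spatialization Metadata, and optional plug-in Metadata. These types are hypothetical and are not the actual Wwise SDK structures; they only mirror the three properties listed above.

```cpp
#include <cassert>
#include <optional>
#include <string>
#include <vector>

// Illustrative sketch only: hypothetical types, not the Wwise SDK's
// Audio Object structures.

// 3D position and orientation, as carried in an Audio Object's Metadata.
struct Transform3D {
    float position[3];
    float orientationFront[3];
    float orientationTop[3];
};

// An Audio Object: one or more channels of audio plus optional Metadata.
struct AudioObject {
    std::vector<std::vector<float>> channelBuffers; // one buffer per channel
    std::optional<Transform3D> spatialization;      // absent for non-spatialized objects
    std::vector<std::string> pluginMetadata;        // plug-in Metadata (hypothetical representation)

    bool IsSpatialized() const { return spatialization.has_value(); }
    std::size_t ChannelCount() const { return channelBuffers.size(); }
};
```

For example, a spatialized mono voice would carry one channel buffer plus a `Transform3D`, while a stereo, non-spatialized bed would carry two channel buffers and no spatialization Metadata; an endpoint that supports 3D audio could then render the first binaurally and pass the second through unchanged.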
The following sections provide details on authoring Audio Objects and ensuring they reach the endpoint.