Version
2021.1.14.8108


Understanding Object-Based Audio

The Object-based audio pipeline allows Wwise to deliver individual audio buffers, along with their Metadata, to the part of a platform’s operating system in charge of audio, known as an endpoint. Endpoints that support 3D audio can use this Metadata, which includes a 3D position and orientation, to render their own spatialized effects. This results in the best possible spatial precision in the delivery of the sound because the endpoint is aware of the listening configuration. Therefore, the endpoint can use the most appropriate rendering method to deliver the final mix over headphones or speakers.

The combination of an audio buffer and its corresponding Metadata is referred to as an Audio Object. Audio Objects in the Bus Hierarchy differ from System Audio Objects sent to the endpoint. When a bus configuration is set to Audio Objects, the bus can hold different types of Audio Objects, from 3D objects to multi-channel, non-spatialized objects.

Each Audio Object can have:

  • 3D spatialization information (or none).

  • Plug-in metadata (or none). This can be inserted directly on the Audio Object from the Actor-Mixer Hierarchy, or propagated to the Audio Object as it passes through a bus that has a Metadata plug-in.

  • One or more channels.
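The properties above can be pictured as a simple data structure: an audio buffer paired with optional spatialization and plug-in metadata. The following is a minimal conceptual sketch; the type and field names are illustrative assumptions, not the actual Wwise SDK API.

```cpp
#include <cassert>
#include <cstdint>
#include <optional>
#include <vector>

// Hypothetical 3D spatialization metadata: a position and an
// orientation (forward vector) in the listener's space.
struct Position3D {
    float x, y, z;
    float ox, oy, oz;
};

// Hypothetical model of an Audio Object: an audio buffer plus its
// Metadata, as delivered to the endpoint.
struct AudioObject {
    std::vector<float> buffer;           // interleaved sample data
    uint32_t channelCount;               // one or more channels
    std::optional<Position3D> position;  // present only for 3D-spatialized objects
    std::vector<uint8_t> pluginMetadata; // opaque plug-in metadata, possibly empty
};

// An endpoint that supports 3D audio would use the position metadata
// to render spatialization itself; objects without it are mixed as-is.
bool IsSpatialized(const AudioObject& obj) {
    return obj.position.has_value();
}
```

In this sketch, a mono 3D voice would carry a one-channel buffer plus a `Position3D`, while a stereo music bed would carry a two-channel buffer with no position, illustrating how one bus can hold both kinds of Audio Objects at once.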

The following sections provide details on authoring Audio Objects and ensuring they reach the endpoint.

