We are developing an audio playback system for an XR cinema application for Meta Quest.
Using Wwise 2024, I have been able to monitor and profile a full 3D Atmos film mix (ADM), including Dolby's audio objects with their own metadata, i.e. moving sound sources embedded in the ADM. The question is whether this playback can be decoupled from the listener's head orientation to be "north-locked" instead of "head-locked", so that the Dolby Atmos mix stays fixed to a central point, i.e. a virtual cinema screen in XR. As soon as I try to spatialise the sound or lock it to a position, we seem to lose all of the spatialisation contained within the multichannel sound file; it is as if the source is now being treated as mono. Is there a way to avoid or work around this?
In game engines we have achieved what we are after by splitting a multichannel mix (e.g. 7.1.4) into multiple mono emitters locked in 3D space, so that the listener can walk around the virtual speaker array, which is very effective and immersive for VR applications. We are hoping to recreate this using Atmos within Wwise, without the interim stage of splitting up an existing multichannel format.
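For context, here is a minimal sketch of how we place the multi-mono emitters around a fixed anchor point. The channel angles and the coordinate convention are illustrative assumptions on our part, not values taken from any Wwise or Dolby documentation:

```python
import math

# Nominal azimuth/elevation (degrees) for the 11 full-range channels of a
# 7.1.4 bed. These angles are illustrative assumptions for a virtual
# speaker array, not an official Dolby layout specification.
SPEAKERS_7_1_4 = {
    "L": (30, 0), "R": (-30, 0), "C": (0, 0),
    "Lss": (90, 0), "Rss": (-90, 0),      # side surrounds
    "Lsr": (150, 0), "Rsr": (-150, 0),    # rear surrounds
    "Ltf": (45, 45), "Rtf": (-45, 45),    # top front
    "Ltr": (135, 45), "Rtr": (-135, 45),  # top rear
}

def emitter_positions(center, radius):
    """World-space position for each virtual speaker, locked around
    `center` (x, y, z). Convention here: y is up, azimuth 0 faces +z
    ("north"), positive azimuth is to the listener's left."""
    cx, cy, cz = center
    positions = {}
    for name, (az, el) in SPEAKERS_7_1_4.items():
        a, e = math.radians(az), math.radians(el)
        positions[name] = (
            cx + radius * math.cos(e) * math.sin(a),  # x: left/right
            cy + radius * math.sin(e),                # y: up
            cz + radius * math.cos(e) * math.cos(a),  # z: forward
        )
    return positions
```

Each channel of the split mix is then assigned to the mono emitter at its computed position, and those positions never change with head orientation, which is what gives the north-locked behaviour we are after.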