Introduction To Audio In VR: Opportunities and Challenges

Spatial Audio / VR Experiences

Among the many new technologies on the market, virtual reality is one of the most in demand, not only among consumers but also in the enterprise and government sectors.

The VR industry is expected to reach $120 billion by 2020. A report by Digi Capital states that the arrival of high-end VR hardware, equipped with powerful CPU and GPU chipsets and premium features, will be the initial driver of the consumer VR market. Companies like Samsung, with its Gear VR, are expected to bring big changes to the industry. The same report notes that the Korean tech giant is making great strides in addressing user demands such as display quality and wireless audio output. Samsung's focus on VR is evident in the massive upgrades to the Galaxy S8. As featured by O2, the handsets come with VR-focused features such as the 'Infinity Display', a 64-bit octa-core processor with 4 GB of RAM, and Bluetooth 5.0. These technologies promise a reliable and robust audio connection (four times the range, two times the speed, and eight times the broadcast message capacity of the previous standard) for a more immersive VR experience.


When it comes to virtual reality, many users discuss VR apps based solely on the image content presented to them and far less on the audio. Just how important is 3D audio in virtual reality? The truth is that for VR to be truly immersive, it needs convincing sound to match. Badly implemented audio in VR can be off-putting and can hurt user acceptance.

What 3D audio offers is a little map in your brain, Engadget author Mona Lalwani explained: it lets you know where things are even when they are not in your field of view.

“The premise of VR is to create an alternate reality, but without the right audio cues to match the visuals, the brain doesn't buy into the illusion. For the trickery to succeed, the immersive graphics need equally immersive 3D audio that replicates the natural listening experience,” Lalwani stated.

However, there are fundamental problems that need to be addressed, one of which is externalising sounds. In television shows and films, you will notice that the narrator's voice sounds different from the people you actually see on screen because of the way it was recorded (closer to a condenser microphone than the 'on set' dialogue). The narrator's voice sits inside the viewer's head, and that in-head quality is exactly what developers need to avoid to achieve a seamless virtual environment for users.

Developers need to ensure that the perspectives are convincing. How you give players information about distance is vital if users are to localise something accurately in the virtual space. The acoustic behaviour of the environment needs to be recreated convincingly, with a delicate balance between the sound that travels directly to the ears and the sound that bounces around the room: the length of the reverberation, the ratio between direct and indirect sound, and the perceived loudness all feed into the sense of distance.
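To make that distance cue a little more concrete, here is a minimal sketch in C++, purely illustrative and not taken from Wwise or any particular engine: the direct level falls off with distance while the room reverb send stays roughly constant, so the direct-to-indirect ratio itself becomes the distance cue. The function and parameter names are assumptions made for the example.

```cpp
// Minimal, illustrative sketch (not Wwise or any engine's API): distance drives
// the balance between the direct path and a shared room reverb send.
#include <algorithm>
#include <cstdio>
#include <initializer_list>

struct DistanceCues {
    float directGain;  // gain on the dry, direct path
    float reverbSend;  // gain sent to the room reverb
};

// Inverse-distance attenuation on the direct path while the reverb send stays
// roughly constant: as the source moves away, the direct-to-indirect ratio
// drops, which the brain reads as "farther away".
DistanceCues ComputeDistanceCues(float distanceMeters,
                                 float referenceDistance = 1.0f,
                                 float reverbLevel = 0.25f)
{
    float d = std::max(distanceMeters, referenceDistance);
    return { referenceDistance / d,  // roughly -6 dB per doubling of distance
             reverbLevel };          // room energy falls off far more slowly
}

int main()
{
    for (float d : {1.0f, 2.0f, 4.0f, 8.0f}) {
        DistanceCues c = ComputeDistanceCues(d);
        std::printf("%4.1f m -> direct %.2f, reverb send %.2f\n",
                    d, c.directGain, c.reverbSend);
    }
    return 0;
}
```

In a real engine this balance is usually handled by attenuation curves and auxiliary sends rather than hand-written code, but the underlying ratio is the same idea.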

Another way to recreate a natural listening experience is by making a binaural recording, which creates a clear distinction between left and right. It is an important element of successful 3D audio, as it helps the user's brain pinpoint the source of a sound. However, it does not work equally well in all directions, since sounds coming from the front and the back are more ambiguous. When sound interacts with the outer ears, neck, shoulders, and head, it is filtered in a way described by the Head-Related Transfer Function (HRTF). This colouration is what helps the brain resolve the confusion. Overall, binaural recording embodies the core of personalised immersive audio.
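The most basic binaural cues are the timing and level differences between the two ears. The sketch below assumes a simplified spherical-head model (the Woodworth approximation) rather than a measured HRTF, and estimates only the interaural time difference for a few azimuths; the spectral colouration described above is what a full HRTF adds on top.

```cpp
// Minimal sketch of one binaural cue, using a simplified spherical-head model
// (the Woodworth approximation) instead of a measured HRTF. It only covers the
// interaural time difference; a real HRTF adds the spectral colouring from the
// outer ears, head, and shoulders that helps resolve front/back confusion.
#include <cmath>
#include <cstdio>
#include <initializer_list>

constexpr float kHeadRadius   = 0.0875f;  // metres, average adult head
constexpr float kSpeedOfSound = 343.0f;   // metres per second

// Interaural time difference in seconds for a source at the given azimuth
// (0 = straight ahead, pi/2 = directly to one side).
float InterauralTimeDifference(float azimuthRadians)
{
    return (kHeadRadius / kSpeedOfSound) *
           (azimuthRadians + std::sin(azimuthRadians));
}

int main()
{
    const float kPi = 3.14159265f;
    for (float degrees : {0.0f, 30.0f, 60.0f, 90.0f}) {
        float itd = InterauralTimeDifference(degrees * kPi / 180.0f);
        std::printf("azimuth %3.0f deg -> ITD about %3.0f microseconds\n",
                    degrees, itd * 1.0e6f);
    }
    return 0;
}
```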

Developers also need to consider the limitations of the human brain. Audio's power is to give users information or to influence their emotional state, but there is a limit to the amount of auditory information they can process. Film editor Walter Murch called this the 'Law of Two and a Half': a listener can easily isolate two sets of sounds, but a third takes away the brain's ability to distinguish the individual elements. Sound therefore needs light and shade when presented to users. That said, more than two positional cues at a time can still be effective if there is a need to briefly disorient the player on purpose.
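As an illustration of how that idea might be applied, the hypothetical sketch below keeps full positional treatment for only the two highest-priority sounds and folds the rest into an ambient bed. The cue names, priorities, and the limit of two are assumptions for the example, not a feature of any particular toolset.

```cpp
// Hypothetical sketch of applying the "Law of Two and a Half": only the two
// highest-priority sounds keep full positional treatment, the rest are folded
// into an ambient bed. Names, priorities, and the limit of two are illustrative.
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <string>
#include <vector>

struct Cue {
    std::string name;
    float priority;   // designer-assigned importance
    bool positional;  // does it get full 3D treatment this frame?
};

void LimitPositionalCues(std::vector<Cue>& cues, std::size_t maxPositional = 2)
{
    std::sort(cues.begin(), cues.end(),
              [](const Cue& a, const Cue& b) { return a.priority > b.priority; });
    for (std::size_t i = 0; i < cues.size(); ++i)
        cues[i].positional = (i < maxPositional);
}

int main()
{
    std::vector<Cue> cues = {
        {"dialogue",        1.0f, false},
        {"enemy footsteps", 0.9f, false},
        {"UI click",        0.5f, false},
        {"distant traffic", 0.3f, false},
    };
    LimitPositionalCues(cues);
    for (const Cue& c : cues)
        std::printf("%-16s %s\n", c.name.c_str(),
                    c.positional ? "positional" : "ambient bed");
    return 0;
}
```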

Although this article may point toward possible solutions, the truth is that audio for VR is still a work in progress. But the combination of 3D audio and head-tracking does seem to complement the visuals and make virtual reality complete.

"Audio, from an evolutionary perspective, is the thing that makes you turn your head quickly when you hear a twig snap behind you," said Joel Susal, director of Dolby’s AR and VR business. "It's very common that people put on the headset and don't even realise they can look around. You need techniques to nudge people to look where you want them to look, and sound is the thing that has nudged us as humans as we've evolved."

TechJVB

Blogger

Freelance


TechJVB is a certified audiophile and gaming expert with expertise in AR, VR, AI, and the like. She has attended several tech conferences in Europe and Asia, and has been invited as a guest speaker at schools in Manchester to inspire young minds to enter STEM fields and explore the greater potential and future of technology.

Comments

Andrew Menino

June 25, 2017 at 01:38 pm

Great article! Looking forward to investigating the "Law of Two and a Half"! Thanks a lot

