Setting up Wwise for a Spatial Live Electronic Performance

Interactive Music / Location-Based Entertainment (LBE)

This article was originally published on gabrielgallardoalarcon.com. 

A short jam with the system running. The audio in the videos has been decoded from ambisonics to binaural. The use of headphones is highly encouraged.

In this blog entry I will explain how I tried to apply an ambisonic audio workflow with Unreal Engine for a school project, how I failed, and how I used that failure as inspiration to try something entirely different.

Initial thoughts and exploration

I originally envisioned a generative sound piece that would control the movement of visual objects in Unreal Engine, all in real time. These visual objects would move in a 3D scene and emit audio. The audio would then be encoded to ambisonics, decoded with IEM plugins in REAPER, and finally reach the output through the multi-speaker setup at the Audiovisual Media laboratory (AVM) at my university. My main intention was to get familiar with Unreal Engine 5 in general, with its new DSP audio rendering graph called MetaSounds, and with the possibilities of spatial audio for live performances of experimental music. Very soon I encountered roadblocks. Playback of ambisonic recordings in Unreal is pretty straightforward, but encoding the engine's own audio objects into an ambisonic stream and getting that stream out of the engine for external decoding is not.

After spending several days researching and asking in numerous forums without an answer, I decided to implement Wwise into the project and try something slightly different. I’ve used Wwise and ambisonics successfully in the past, but I didn’t want to rely on it, because that would mean using samples for my composition, while what I really wanted was to try out MetaSounds' synthesis capabilities. Nevertheless, Wwise offers a wide range of possibilities for open-ended composition and can handle ambisonics up to 5th order. My first tests were successful. Thanks to REAPER’s ReaRoute and its ability to transmit up to 16 channels of audio between applications, I was able to send a third-order ambisonic stream (which requires exactly 16 channels) out for decoding:

img1

3rd order Ambisonic Bus in Wwise

img2

Using ReaRoute as Output

img3

Receiving the Ambisonic stream inside REAPER and distributing it for decoding.

img4

I used IEM’s AllRADecoder to output the ambisonic stream to the AVM’s custom speaker setup.

Unfortunately, a new roadblock appeared. While Wwise on its own transmits spatial audio to external applications just fine, the situation changes when it runs alongside UE5. Every time I tried to test the audio with Unreal, my Master bus in Wwise stopped recognizing ReaRoute and reverted to stereo.

AudioLink is a new protocol in Unreal 5.1 that allows for sounds from Unreal Audio (including MetaSounds) to be mixed by another audio engine, such as Wwise. This functionality did not exist when this article was initially published.

img5

img6

Audio reverting to stereo when using Wwise alongside Unreal, and ReaRoute disappearing as an endpoint option.

After searching once again through forums and Q&A posts, I found this thread, which appeared to describe issues similar to mine. The thread was never officially answered, but the author followed up with a solution that involved engine code modifications in Wwise. I am not a programmer, but I tried to follow it anyway, unsurprisingly without success. I also sent a message to Michael G. Wagner, Professor and Department Head of Digital Media at the Antoinette Westphal College of Media Arts & Design, whom I follow because of his very useful spatial audio tutorials on YouTube. He replied that it is possible to write your own Audio Device and make it work with Wwise by creating a script. I realized that the difficulty and scope of that approach were far beyond my time constraints, so I abandoned it.

Exploring Wwise as a tool for live electronic performance

After these setbacks, I reconsidered my approach. I was not going to be able to complete the project if I kept trying to make unfamiliar tools work the way I imagined. At the heart of my goal was the exploration of immersive audio and interactivity. I decided to keep that core while reevaluating the execution and thinking more about the possibilities of the tools I already had working (Wwise, REAPER, and the IEM plugins).

Wwise works by deciding when and how sound events should be played back, grouping sound files into containers with specific behaviors (e.g., a “Random Container” chooses one of the sounds or containers inside it at random, according to rules the user defines; a “Blend Container” crossfades between its sounds or containers as driven by an external control parameter, and so on). These containers are then referenced by “Events” that define the final logic of when and how they are chosen to be played by the system. Usually, the game’s code decides when to fire these Events, and an analogy to live performance is easily drawn: if the game’s code is the performer, Wwise is the launchpad.
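
To make the analogy concrete, here is a minimal sketch of how game code would normally fire one of these Events through the Wwise SDK. The names in it are placeholders rather than anything from my project; in this performance setup, MIDI messages take the place of these calls.

```cpp
// Minimal sketch of how game code would normally fire a Wwise Event.
// The event name "Play_Choir_Bass" and the game object ID are placeholders,
// not names from this project; in my setup MIDI messages replace these calls.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

static const AkGameObjectID kPerformer = 100; // arbitrary emitter ID

void InitPerformer()
{
    // Register the emitter once (normally done when the object spawns).
    AK::SoundEngine::RegisterGameObj(kPerformer, "Performer");
}

void FireChoirNote()
{
    // Post the Event; Wwise's containers decide which sound actually plays.
    AK::SoundEngine::PostEvent("Play_Choir_Bass", kPerformer);
}
```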

Fortunately for us, Wwise offers the possibility of mapping MIDI messages to Events inside the authoring tool. Unfortunately, the mapping method is very cumbersome compared to tools designed with live performance in mind (e.g. Ableton Live). Wwise’s MIDI mapping is meant for testing, so the sound designer can easily and freely fire many Events at a time without having to execute game code first. This works well when launching simple one-shot Events, but it doesn’t let us map (or at least I haven’t found a way) other important features that Wwise offers, like switch and state changes, or real-time 3D positioning and attenuation simulation. Another disadvantage compared with traditional music performance software is the lack of a global clock. BPM can be taken into account while designing sound behaviors inside what is called the “Interactive Music Hierarchy”. I never actually tried to map MIDI to BPM there, but I have always found the Music Hierarchy limiting, since its tools focus on the treatment of long pre-rendered audio, and I prefer the more granular approach of the Actor-Mixer Hierarchy. Clock, positioning, and switches were all fundamental operations, so I had to find workarounds.
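
For context, the switch and state changes that the MIDI mapping cannot reach are one-line calls once a game engine is connected. The sketch below shows roughly what they look like in the Wwise SDK, with hypothetical group and value names:

```cpp
// Sketch of the switch/state calls a connected game would make; the authoring
// tool's MIDI mapping does not expose these. All group and value names
// ("ScaleMode", "Diatonic", "DrumKit", "Kit_A") are hypothetical examples.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

void SetPerformanceModes(AkGameObjectID drumEmitter)
{
    // Global state change: affects every object subscribed to the state group.
    AK::SoundEngine::SetState("ScaleMode", "Diatonic");

    // Per-object switch change: affects only the given emitter.
    AK::SoundEngine::SetSwitch("DrumKit", "Kit_A", drumEmitter);
}
```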

Sending MIDI

img7

A custom configuration of my MIDI device

After selecting, editing, and implementing my samples in Wwise, I started by creating Events without any logic, and then populated my containers. Their behavior stayed in constant iteration and required fine-tuning until the very end of the process. Creating the Events first allowed me to map them to MIDI messages from the beginning and to test and iterate quickly while trying out my container behaviors.

img8

Some of the structures of the sample instruments I created. They ended up using several levels of inheritance.

The next step was to hook up my MIDI controller and configure it so it could work as intended with my behavior structure. I chose to work with my Akai MPK Mini for two reasons: first, it offers the ability to customize the entire layout to choose what messages each knob and pad sends individually; and second, it has its own clock that allows for a variety of different arpeggiator modes and speeds. The fact that I was able to tap tempo and change subdivisions on the fly was very important to give my sequencer musicality and variability. 

img9

MIDI mapping in Wwise

After configuring my controller, I mapped the Events I had created to different MIDI notes, arranged in whatever layout I found most comfortable to play.

Selecting musical scales

The next step was to work on the behavior of my pitch content. Originally I planned to make everything chromatic, but then I considered having two modes for more variation: one chromatic and one diatonic. This would normally require programming two different states, which, as mentioned before, are not accessible for MIDI mapping. The solution was to use a Blend Container to crossfade between two Random Containers of notes (one containing all 12 tones of the chromatic scale, the other only those of the C major key), with a Real-Time Parameter Control (RTPC) to go from one container to the other. Since RTPCs can be mapped to a specific MIDI CC value range, I was able to control this feature easily with a knob.

img10

In this example, the Bass voice of the choir can switch between scale modes thanks to a Blend Container.
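
Purely as a comparison, if a game engine were driving this Blend Container instead of a knob, the whole control would boil down to a single RTPC call, roughly like the sketch below (the RTPC name is a placeholder; in my setup the CC range is bound to the RTPC inside the authoring tool):

```cpp
// Sketch: driving the scale-selection Blend Container from code rather than
// from a MIDI CC knob. The RTPC name "Scale_Blend" is a placeholder; in my
// project the CC range is bound to the RTPC inside the authoring tool.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

void OnScaleKnob(int ccValue /* 0-127 from the controller */)
{
    // 0 = fully chromatic container, 127 = fully diatonic (C major) container.
    AK::SoundEngine::SetRTPCValue("Scale_Blend",
                                  static_cast<AkRtpcValue>(ccValue),
                                  AK_INVALID_GAME_OBJECT); // global scope
}
```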

The challenge of spatialization

Despite its powerful spatialization engine, Wwise depends on a game engine to receive the coordinates of sound emitters in space. When no game engine is connected, the user can still simulate positioning by using 3D panners or attenuation previews. Although useful for testing and debugging, these tools can only be manipulated with the mouse, and as far as I know, no MIDI mapping is offered. This limitation makes real-time spatialization for my specific purposes really challenging, so I needed to find workarounds.

img11

Unfortunately, I didn’t find a way of mapping panning previews to MIDI CC inside of Wwise authoring.
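
For reference, this is the kind of call a connected game engine would make to feed emitter coordinates to Wwise; without it, only the mouse-driven previews shown above are available. Everything in the sketch is illustrative rather than part of my setup:

```cpp
// Sketch of the positioning call a connected game engine would make to feed
// emitter coordinates to Wwise. The emitter ID, coordinates, and orientation
// are illustrative; without a game engine, only the authoring previews exist.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

void MoveEmitter(AkGameObjectID emitter, float x, float y, float z)
{
    AkSoundPosition pos;
    pos.SetPosition(x, y, z);          // where the emitter sits
    pos.SetOrientation(0.f, 0.f, 1.f,  // facing +Z
                       0.f, 1.f, 0.f); // up vector +Y
    AK::SoundEngine::SetPosition(emitter, pos);
}
```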

My initial approach was to set my sources to specific points in space as starting positions and then program certain default movements. In Wwise, this can be achieved through the “3D Position” section in the “Positioning” tab of a sound or container. From there, when “Emitter with Automation” is selected, a new window can be opened. It lets you predefine positions in a simulated 3D space: by placing points in space and time (much like keyframes in animation), you can draw a path for the sound to travel along within a defined timeframe.

img12

The “Positioning” tab in Wwise

img13

The initial position in space of my sources. I organized them with the layout of the AVM in mind.

img14

A spatial automation path for a sound in Wwise

This method gave me a more creative way of spatializing sounds, but I still didn’t like the lack of control. I expected that having my sources constantly moving along predefined paths would cheapen the whole experience very quickly. To counteract this, I used the same Blend Container method as before and created another RTPC that I called “Spatial Mode”. This parameter acts as a sort of on/off switch for the movement paths, so I could at least decide when to start and stop the motion of sources through space. I achieved this by populating the container with two versions of each sub-container: one with only initial positioning data (indicated by the suffix “Sectional”) and another containing movement automation paths (suffix “spread”).

img15

My second RTPC blend control, used to turn the movement paths on or off.

Exploring spatial granularity

Building on the work done so far, I ventured a bit further and tried to figure out less obvious solutions for spatialization. The path automation window lets you store up to 64 paths in a list. Every time an Event calls the object that contains the automation, a single path from the list is chosen, and the user can decide whether the Event cycles through the list sequentially or picks at random. This gave me the idea of exploring a kind of spatial “granularity”: I wondered whether an illusion of fluid movement through space could be built out of many static positions fired in rapid succession. I tested this with percussion sounds because of their brevity. I filled the path lists of some of my percussion samples with static positions that, taken together, formed the individual steps of simple shapes like circles and figure-eights, or that moved in parabolas across the X and Y axes. The clock of my MIDI controller can fire up to 1/32T notes at 280 BPM. The results were positive, and I was able to control the speed of the movement by changing the clock of my arpeggiator or by firing MIDI notes manually. This gave my set more unpredictability and dynamism, and although admittedly limited, it allowed me some agency over the spatial component of the performance. I would certainly like to experiment more with this and other concepts in the future, since I believe I have only scratched the surface of what Wwise could offer for performative spatialization.

img16

These three pictures show three different static path points on my kick sample, which moves in a circle by roughly 6° each time the “playOneShot_drum01” event is fired via MIDI.
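
For anyone curious about the arithmetic, the sketch below derives the kind of coordinates I typed into the path list by hand: 64 entries spaced 360/64 = 5.625° apart around a circle, which is where the roughly 6° step per trigger comes from (the radius is arbitrary):

```cpp
// Sketch of the arithmetic behind the "granular" circle: 64 static path
// entries spaced 360/64 = 5.625 degrees apart, the "roughly 6 degrees" per
// trigger mentioned above. I entered these points by hand in the path list;
// this standalone program only shows how such coordinates can be derived.
#include <cmath>
#include <cstdio>

int main()
{
    const int   kSteps  = 64;    // maximum number of paths in the list
    const float kRadius = 5.0f;  // distance from the listener, arbitrary units
    const float kPi     = 3.14159265f;

    for (int i = 0; i < kSteps; ++i)
    {
        float angle = 2.0f * kPi * static_cast<float>(i) / kSteps; // radians
        float x = kRadius * std::cos(angle);
        float y = kRadius * std::sin(angle);
        std::printf("step %2d: x = %6.2f  y = %6.2f\n", i, x, y);
    }
    return 0;
}
```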

Effect controls 

After deciding on the spatial behavior, I started working on effect groups. Since I was running out of time, I didn’t go too deep, but I managed to map two effects of my drum group to knobs on my MIDI controller: low-pass filtering and pitch transposition. I was especially surprised by how well Wwise handles pitch shifting in real time, across a range of +/- 24 semitones. I believe both parameters gave my sequencer an extra layer of expression, but I would like to explore this further in the future.

img17

The four parameters I mapped to my MIDI controller.
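
Expressed as code purely for clarity, the pitch knob amounts to scaling a 0-127 CC value onto Wwise's ±2400-cent voice pitch range. The sketch below is only an illustration with a placeholder RTPC name; in the project this scaling lives in the RTPC's MIDI binding rather than in code:

```cpp
// The pitch knob's mapping expressed as code, purely for clarity: a 0-127 CC
// value scaled onto Wwise's +/-2400-cent (+/-24 semitone) voice pitch range.
// The RTPC name "Drum_Pitch" is a placeholder; in the project this scaling is
// configured in the RTPC's MIDI binding, not in code.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

void OnPitchKnob(int ccValue /* 0-127 */)
{
    // Map 0..127 to -2400..+2400 cents (CC 64 sits near no transposition).
    float cents = (static_cast<float>(ccValue) / 127.0f) * 4800.0f - 2400.0f;
    AK::SoundEngine::SetRTPCValue("Drum_Pitch", cents, AK_INVALID_GAME_OBJECT);
}
```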

Conclusions

This project taught me many lessons about spatial audio, but also about live electronic performance. Ambisonic audio in Wwise worked very well for this specific setup, but it would be nice to see support for VST plugins on audio buses in the future. I would prefer to decode the ambisonic stream to a custom speaker setup directly inside Wwise, instead of sending that stream to REAPER to be decoded with IEM or SPARTA plugins. As much as I love REAPER, adding extra software only for playback introduces a level of complexity that could easily be avoided if Wwise worked with third-party plugins like a regular DAW. This is especially annoying on Windows, where specialized (and often temperamental) third-party software is required to stream audio from application to application.

On the other hand, I was pleasantly surprised by how Wwise works with MIDI input in its authoring environment, and I see great potential for creative uses that go beyond game audio implementation. That being said, I believe this potential is cut short by design. I had experience with Wwise before, mainly on small game projects, and this certainly helped me find alternative solutions to many cumbersome problems. As the popularity of the tool (and the relevance of interactive audio in general) increases, I hope the team at Audiokinetic recognizes this potential and starts to offer more functionality and better workflows for other types of interactive audio experiences. This exercise showed me that there are a multitude of things that Wwise does much better than established tools like Ableton Live or Logic Pro, but until better workflows are implemented, that potential will not be realized.

Note: The reader can watch and listen to more examples of the sequencer in action by following this link. The audio in the videos has been decoded from ambisonics to binaural. The use of headphones is highly encouraged.

Gabriel Gallardo-Alarcon

Sound and Music Designer

Freelance

Gabriel started teaching himself about game audio in 2015. By 2018, he had released several indie projects in his native Bolivia with the first generation of game developers from that country, who later founded the Bolivian Videogame Association (or ABV, from its Spanish acronym). He is also a former lecturer in game audio for the first accredited game development education program in Bolivia. Thanks to a scholarship, Gabriel moved to Germany in 2019 to pursue a Bachelor's degree in Audio Design. Today he works as a freelance sound and music designer for indie teams in both South America and Europe.

gabrielgallardoalarcon.com

YouTube

Instagram

 @gab_gallard

Comments

Simon Pressey

February 15, 2023 at 10:30 am

Wwise does decode ambisonics in real time at runtime, and you can use the Wwise ASIO output plugin to map to a multichannel speaker configuration that you specify to match your speaker setup.
