Using Wwise for Theatre: An Adaptive Soundscape for the Theatre Play Le Léthé

Interactive Audio / Location-Based Entertainment (LBE) / Sound Design

If we imagine that a theater set is comparable to a virtual game world, wouldn’t Wwise succeed in giving life to a stage? In fact, it beautifully did on Le Léthé, and although Audiokinetic’s middleware Wwise was initially designed for games, it may be as powerful for theater sound designers as it is for game audio designers.

Prior to working on Le Léthé, I had experience working with Wwise for games, but absolutely none in theater. This project was the occasion to confirm my belief that a game audio mindset and workflow (i.e., a granular, event-based approach) can be effectively applied to a theater play. Beyond the technical side of things, theater and games share similarities in how and where action and narration unfold, whether in a game world or on stage, so the transfer seemed to make sense.

I hope that theater sound designers with no experience in middleware may find this article helpful. From my experience on this project, I believe game audio middleware can enable them to build adaptive soundscapes, music, and playback systems that are rock solid and incredibly easy to use during shows.


The Project

Le Léthé is a play about memory and forgetfulness by play director Philippe Vallepin, adapted from a 2000 book by Dimitris Dimitriadis.

From the beginning of production, the play director knew that audio was going to play an important role, as it would help to pace and structure the text (which contains many inversions and repetitions of words).

I was initially brought onto this project to design “sound textures”, but as we progressed, I was given almost complete freedom over the content, how much sound there would be, and how it would play back.

 

Production Work Method

I worked in Reaper with video recordings of the rehearsals. From there, I spotted where Wwise Events would be triggered. I then designed and composed a short “static” version of the soundtrack. I mixed the stems in 7.1 with IRCAM’s Spat (each act happens in a different corner of the auditorium). The sound design and music compositions were then granulized and exported into smaller segments and layers. Finally, the real-time designs were made in Wwise and played back with a Soundcaster session.

I would say that one third of the time was spent gathering field recordings, working with musicians, and recording foley; another third composing drafts in Reaper; and the last third in Wwise on the playback system.

 

Approach

The main objective was to make the soundtrack inseparable from the actors’ performance. We also wanted to support the actors without having them worry about synchronization with a complex soundtrack. The soundtrack had to follow the actors’ own pacing during rehearsals while acting as a coating: structuring, pacing, creating ruptures and smooth transitions, and changing densities.

I made no distinction between foley, music, and field recordings when composing. They were taken as a whole, and every recorded sound served as a potential component. For example, some winds in Le Léthé were designed by manipulating silk clothing and creaking bow hair on a double bass, part of the rain comes from manipulating peas in a bowl, and some of the internal body sounds were made with parquet floor creaks, nylon bags, an accordion, unusual microphone placements, and detuned instruments.

Space & Time Continuum

All scenes in Le Léthé’s original text take place in unknown spaces (except one, set at a bus stop). Another deliberate ambiguity is that it is never made clear whether the twelve characters in the play are indeed twelve characters or just one person.

Considering the abstract nature of the text and its characters, I thought that the soundtrack could serve as an ‘anchor’ that both actors and viewers could hold on to throughout the show. Audio would act as a thirteenth character, embracing them all, sometimes acting as a physical extension of their bodies (the question of the body is central in the play), or working as a modular scenography set.

I wanted to avoid the artificiality of having the soundtrack play during the first minute of every scene and then fade out when actors speak. Hence, there is sound throughout the play (~1h15min). It ranges from a barely audible support, working as a kind of ‘perfume’ that maintains the sensation of space, to competing with shouting actors at times. This lets viewers tune out the soundtrack naturally, yet if one of them ever wandered off into an inner reverie, the soundtrack was there to bring him or her right back into the play.

So, I needed a playback system capable of driving smooth, complex transitions, quickly adaptable during rehearsals, and able to keep track of dozens of sounds playing at the same time with automatic, independent actions: fades in/out, delayed triggers, filtering, volume changes, limited or unlimited looping, and so on.

Basically, the Wwise Event system helped me systematize and encapsulate audio behaviors that were always the same (i.e. at this point in the text, this set of sounds stops here, and these start in 10 seconds with a 20 second fade in, those others start right away, etc.), and trigger them all with a single button (Event). It helped me concentrate on more critical elements like syncing with actors, and dynamic sound objects (RTPCs, see 2nd extract below).
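
To make this concrete, here is a minimal sketch of what triggering one of these encapsulated behaviors looks like through the Wwise C++ SDK. The Event name and game object are hypothetical, and engine initialization and SoundBank loading are assumed to have happened elsewhere; in Le Léthé the Events were fired from Soundcaster rather than from code, but the logic is the same.

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>

static const AkGameObjectID kStage = 100; // hypothetical game object standing in for the stage

void SetupShow()
{
    // Assumes AK::SoundEngine::Init() has succeeded and the SoundBank
    // containing the show's Events is already loaded.
    AK::SoundEngine::RegisterGameObj(kStage, "Stage");
}

void OnCueButtonPressed()
{
    // One call: Wwise itself performs every Action authored inside the
    // Event (stop this set of sounds, start that one in 10 seconds with
    // a 20-second fade-in, start the others immediately, and so on).
    AK::SoundEngine::PostEvent("Play_ActV_Section3", kStage); // hypothetical Event name
}
```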

This is all very basic game audio design, but it works wonders in a theater play!

 

Detailed Extracts of Playback Designs in Le Léthé

 

ACT V - ‘Shouts and Racket’ sequence

 

In this extract, each layer of sound has its own fade curve and Action delay linked to Events. Each Event corresponds to a specific section in the text.

In this simple extract, 32 Actions are performed. Some play voices (with infinite looping, limited looping, or as one-shots), others stop voices or modulate filters, each with its own delay, fade in/out time, and curve type.

Yet during the show, there are only a few buttons to play with (7 to trigger Events, 1 to tweak an RTPC). If actors forgot their lines, improvised for a minute, or jumped to the next page, I was able to follow them with no audio drops. The fact that I am playing with Events and their hidden associated audio logic changes everything. I am not manipulating audio; Wwise is. I am simply following the action on stage and triggering Events.
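
Purely as an illustration (the actual show ran from Soundcaster, not from custom code), the whole control surface for this sequence could be reduced to something like the following sketch, with made-up Event names; the single RTPC knob is covered in the next extract.

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>
#include <cstdio>

static const AkGameObjectID kStage = 100; // hypothetical stage game object

// The seven section cues of the sequence, in text order (names are made up).
static const char* kCues[7] = {
    "ActV_Cue1", "ActV_Cue2", "ActV_Cue3", "ActV_Cue4",
    "ActV_Cue5", "ActV_Cue6", "ActV_Cue7"
};

void RunOperatorLoop()
{
    int key;
    while ((key = std::getchar()) != EOF)
    {
        if (key >= '1' && key <= '7')
        {
            // A single keystroke fires all the Actions authored in that Event.
            AK::SoundEngine::PostEvent(kCues[key - '1'], kStage);
        }
    }
}
```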

Fade curves were defined in Wwise; the original stems contained no fades. This had at least two advantages for me. One, each stem could easily be tuned and locked in Wwise during rehearsal, so that it was performed perfectly every time. Two, I didn’t have to worry about how fast to move a group fader on a mixing console; I just had to press a button in Soundcaster, which had independent consequences for potentially many sounds.

 

ACT V - Use of Real Time Parameter Controls - ‘Body’ sequence

In this sequence, actors go through varying intensities and moods. I followed them with this audio design made with a Blend Track and controlled with a MIDI knob. It allowed me to stick to the actors’ performance and create sudden silences, crescendos, and so on.

It is simply a crossfade between different layers of sound, each with independent modulations that depend not on time but on the value of the parameter I tweak.
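
For readers curious about the code-side equivalent: a MIDI control-change value (0-127) can be rescaled and forwarded to the blend parameter with a single SDK call. The RTPC name and its 0-100 range below are hypothetical; in the production, the knob was bound inside the Wwise authoring tool itself.

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>

static const AkGameObjectID kStage = 100; // hypothetical stage game object

// Forward a MIDI control-change value (0..127) to the Blend Track's game
// parameter. Wwise then drives every crossfade and modulation mapped to
// that RTPC; nothing here depends on time, only on the knob's position.
void OnMidiKnob(unsigned char ccValue)
{
    const float kRtpcMin = 0.0f;   // hypothetical RTPC range
    const float kRtpcMax = 100.0f;
    const float value = kRtpcMin + (kRtpcMax - kRtpcMin) * (ccValue / 127.0f);
    AK::SoundEngine::SetRTPCValue("Body_Intensity", value, kStage); // hypothetical RTPC name
}
```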

Other examples of the use of RTPCs in Le Léthé include managing foley textures in a love scene and playing with the intensity of a fire. I used RTPCs to drive complex behavior as well as to ride a simple volume fader or filters on certain individual sounds or groups.

 

ACT III - Ambiances

Le Léthé contains several sequences with long looping ambiances. For the shorter sequences, each layer has a single simple loop. For the longer ones, I used Random Containers.

I wanted to minimize the sensation of repetition and of audio patterns for the audience. To that end, I designed non-repetitive ambiances in Wwise by breaking up each layer of the ambiance into smaller segments, which Wwise then picked at random. Again, this is classic game audio practice, and it works flawlessly for theater.
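
The underlying idea is easy to reproduce. The sketch below, in plain C++ and independent of the Wwise SDK, mimics how a Random Container can avoid immediate repetition when picking the next ambiance segment; the segment names are hypothetical.

```cpp
#include <cstddef>
#include <random>
#include <string>
#include <vector>

// Picks ambiance segments at random while never playing the same segment
// twice in a row, similar to a Wwise Random Container set to avoid
// repeating the last played sound.
class SegmentPicker
{
public:
    explicit SegmentPicker(std::vector<std::string> segments)
        : m_segments(std::move(segments)), m_rng(std::random_device{}()) {}

    const std::string& Next()
    {
        std::uniform_int_distribution<std::size_t> dist(0, m_segments.size() - 1);
        std::size_t idx = dist(m_rng);
        while (m_segments.size() > 1 && idx == m_last) // reroll immediate repeats
            idx = dist(m_rng);
        m_last = idx;
        return m_segments[idx];
    }

private:
    std::vector<std::string> m_segments;
    std::mt19937 m_rng;
    std::size_t m_last = static_cast<std::size_t>(-1);
};

// Usage: SegmentPicker rain({"rain_01.wav", "rain_02.wav", "rain_03.wav"});
// each call to rain.Next() returns the next segment to queue for playback.
```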

The Command Post: Soundcaster

During the show, I only worked with the Soundcaster Session. Its simple interface really helped me concentrate on the action on stage. Soundcaster is so user-friendly that it ensures a stress-free performance for anyone who operates it: someone with little to no experience with Wwise can take care of the audio during a show, as it is unlikely to overwhelm them and there are few wrong keys to hit.

Therefore, I can craft a complex soundtrack without necessarily having to tour with the theater company, since operation can easily be handed over to someone else. The handover can range from a simple list of sync points in the text with their associated Events, to larger creative freedom with Real Time Parameter Controls to tweak. It is up to sound designers to determine how much room for interpretation, tweakability, and adaptability they want during performances.

Conclusion

While the use of Wwise requires a significant amount of time to design the playback system and to make sure it is bulletproof and flawless during shows, its environment and tools (Soundcaster sessions, Random Containers, Event system, RTPCs, States, Switches, music hierarchy...) have enabled me to craft a soundtrack that the actors felt confident in. They knew it was reliable and would adapt to their pacing, and not the other way around.

For example, during rehearsals some actors took twice as long to complete a scene as I expected, simply because they knew the audio system would follow them no matter how fast or how slow they spoke.

I believe that tools like Wwise might change and facilitate the way we approach theater live sound with interactive workflows. They enable sound designers to create adaptive soundscapes and free actors from inflexible synchronization.

 

 

Pierre-Marie Blind

Sound Designer

Ubisoft Blue Byte


Pierre-Marie is a sound designer currently based in Düsseldorf, Germany, working at Ubisoft Blue Byte. Involved in games, virtual reality projects, theater plays and contemporary arts, he has been an assistant to electroacoustic composer Cécile Le Prado since 2015. His background in both classical music and industrial sound design studies has profoundly influenced the way he approaches sound. Anytime a slight breeze rises you will find him outside looking for new wind textures, or swooshing broken tree branches around as his lust for foley and field recording never leaves him.

 @PierreM_Blind

