Behind the Sound of 'NO MAN'S SKY': A Q&A with Paul Weir on Procedural Audio

Audio Programming / Game Audio / Sound Design

This interview was originally published on A Sound Effect 

What goes into creating the sound for such a vast game as No Man’s Sky? Find out in Anne-Sophie Mongeau’s in-depth talk with this game's Audio Director, Paul Weir.


The game No Man's Sky was an ambitious project that presented considerable audio challenges, due both to its procedurally generated universe and to its style and art. How were those challenges reflected in the audio design and implementation?

Paul Weir (PW): From the beginning, I aimed to keep the ambiences as natural as possible, using lots of original recordings of weather effects and nature sounds. It was a sensible decision to use Wwise and drive the ambiences using the state and switch systems. The advantage of this approach is that you can relatively easily construct an expandable infrastructure into which you can add layers of sound design that respond to the game state. 

With a game like No Man’s Sky you need to pass as much information as practical from the game to the audio systems in order to understand the environment and state of play. For example, what planet biome you’re on, what the weather is doing, where you are relative to trees, water or buildings, whether you’re close to a cave or in a cave, underwater, in a vehicle, engaged in combat and so on.

A simple example of how this information can be brought together without additional programmer support is the introduction of interior storm ambience. We have a control value (an RTPC in Wwise terminology) for ‘storminess’ and know whether the player is indoors or out. It was a simple job then to add different audio, such as shakes and creaks, when indoors and a storm is raging, without having to rely on a programmer to add this.
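The game-to-audio plumbing described in the last two answers maps fairly directly onto the Wwise game-side API. Below is a minimal sketch of the idea, assuming the standard Wwise SDK calls; the state group and RTPC names ("Biome", "PlayerLocation", "Storminess") are made up for illustration and are not the actual No Man's Sky project values.

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>

// Push the relevant game state into Wwise whenever it changes. The sound
// designer can then key new ambience layers (e.g. indoor storm creaks) off
// combinations of these values without further programmer support.
void UpdateAmbienceState(const char* biomeName, float stormIntensity01,
                         bool playerIndoors, AkGameObjectID ambienceObject)
{
    // Discrete states drive the state/switch system the ambiences hang off.
    AK::SoundEngine::SetState("Biome", biomeName);
    AK::SoundEngine::SetState("PlayerLocation", playerIndoors ? "Indoors" : "Outdoors");

    // Continuous 'storminess' control value (RTPC): 0 = calm, 100 = full storm.
    AK::SoundEngine::SetRTPCValue("Storminess", stormIntensity01 * 100.0f, ambienceObject);
}
```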

It helps that nearly all of our audio is streamed, so I have few restrictions on the quantity of audio I can incorporate.

I wouldn't usually use electronic sounds as much as recorded acoustic material, but given the sci-fi nature of the game, a lot of the obviously sci-fi features do use synth sounds, although often combined with real-world mechanical sounds. There's a certain pride I take in recording unassuming everyday objects and using them for key sounds. For example, in the most recent update where we added vehicles, the buggy is my own unglamorous car, recorded using contact microphones; the hovercraft is a combination of a desktop fan and an air conditioning unit; and the large vehicle sounds come from programmer Dave's Range Rover: I just dropped a microphone into the engine, then we went for a spin around Guildford.

Apart from my usual rule of every sound being original, which I appreciate is in itself pretty dogmatic, I have no set approach as to where the sounds come from. It’s whatever works. 


Can you define in a few words the difference between generative and procedural for the readers?

PW: There is no recognised definition for either term, so it’s not possible to definitively describe the difference. For me, generative means it is a randomised process with some rules of logic to control the range of values, it does not need to be interactive. Procedural is different in that it involves real-time synthesis that is live and interactive, controlled by data coming back from the game systems. This differentiation works reasonably well for audio but graphics programmers will no doubt have their own definitions.

How much of the game’s audio is procedurally generated and how would you compare these new innovative techniques to the more common sound design approaches? 

PW: Very little of the audio is procedurally created, only the creature vocals and background fauna. At the moment it’s too expensive and risky to widely use this approach, although there are several tools in development that may help with this. Procedural audio is just one more option amongst more traditional approaches and the best approach as always is to use whatever combination best works for a particular project.

Can you tell us about the generative music system (Pulse) – the goals, what it allows you to do, and its strengths compared to other implementation tools?

PW: Pulse, at its heart, is really just a glorified random file player with the ability to control sets of sounds based on gameplay mechanics. We have a concept of an instrument, which is an arbitrary collection of sounds, usually variations of a single type of sound. This is placed within a 'canvas' and given certain amounts of playback logic, such as how often the sound can play, plus its pitch, pan and volume information. When these instruments play depends on the logic for each soundscape type, of which there are four general variations: planet, space, wanted and map. So for example when in space, instruments in the 'higher interest' area play as you face a planet in your ship or when you're warping. In the map, the music changes depending on whether you're moving and on your direction of travel.
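To make the "glorified random file player" idea concrete, here is a toy sketch of an instrument with a play interval and per-trigger randomisation. It is purely illustrative and far simpler than the real Pulse tool; PlayOneShot is a hypothetical stand-in for firing an actual Wwise event.

```cpp
#include <cstdio>
#include <cstdlib>
#include <string>
#include <vector>

// A value range that yields a new random value per trigger (pitch, pan, volume, interval).
struct Range {
    float min, max;
    float Pick() const { return min + (max - min) * (std::rand() / float(RAND_MAX)); }
};

// Hypothetical playback hook; a real tool would post a Wwise event instead.
void PlayOneShot(const std::string& clip, float pitchSemitones, float pan, float volumeDb)
{
    std::printf("play %s pitch=%.1f pan=%.2f vol=%.1fdB\n",
                clip.c_str(), pitchSemitones, pan, volumeDb);
}

// An 'instrument': an arbitrary collection of sound variations plus the logic
// deciding how often it may play and how each trigger is randomised.
struct Instrument {
    std::vector<std::string> variations;
    Range intervalSeconds;
    Range pitchSemitones, pan, volumeDb;
    float nextTriggerTime = 0.0f;
};

// Called regularly for whichever soundscape (planet, space, wanted, map) is
// currently active; fires any instrument whose timer has elapsed.
void UpdateCanvas(std::vector<Instrument>& canvas, float nowSeconds)
{
    for (Instrument& inst : canvas) {
        if (inst.variations.empty() || nowSeconds < inst.nextTriggerTime)
            continue;
        const std::string& clip = inst.variations[std::rand() % inst.variations.size()];
        PlayOneShot(clip, inst.pitchSemitones.Pick(), inst.pan.Pick(), inst.volumeDb.Pick());
        inst.nextTriggerTime = nowSeconds + inst.intervalSeconds.Pick();
    }
}
```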

We currently have 24 sets of soundscapes, so that’s 60 basic soundscapes, plus special cases like the map, space stations, photo mode, etc.

Pulse also makes the implementation of soundscapes relatively simple. Once you drag the wavs into the tool, it creates all the Wwise XML data itself and injects it into the project, so you never manually touch anything to do with the soundscapes in Wwise.


In NMS, how do music and sound effects interact with each other? What was your approach towards mixing those two, and do you have any recommendations on how to mix music and SFX dynamically?

PW: I always mix as I go, so the mix process wasn't as difficult as you might expect, and as a PS4 title we're mixed to the EBU R128 loudness standard.

Whilst there’s a lot of randomisation in the game, I always know the upper and lower limits of any sound and so over time you reach a reasonably satisfactory equilibrium in the mix. It helps a lot that we don’t have any dialogue. You also have to accept that you’re never going to have a perfect mix with this type of title, so just embrace the chaos.

I do have to be careful with the music though. 65daysofstatic like creating sounds with very resonant frequencies, so sometimes I use EQ to keep these from standing out too much. Similarly, I'll take out sounds that are too noise-based, as they might sound like a sound effect. On the whole though, 90% of what the 65'ers make goes straight into the game.

What's your opinion on sourcing audio from libraries vs. creating original content?

PW: On larger projects I am most irritating in insisting that all of the audio is original and not a single sound is sourced from a library, if at all possible. It does depend largely on the game and practicalities but I’ve been able to do this on No Man’s Sky so far. On smaller projects or where time is of the essence, then obviously it makes sense to dip into libraries. Over the years I’ve amassed a large personal collection of sounds that I’m constantly adding to. 

Can you tell us about the tools you used for NMS’s procedural/synthesised audio, what other software was involved in its creation?

PW: Early in development we used Flowstone to prototype the VocAlien synthesis component. Flowstone has the advantage of being able to export a VST so Sandy White, the programmer behind VocAlien, wrote a simple VST bridge to host plugins in Wwise. For release though it obviously needs to be C++ and cross-compile to PS4 and Windows. VocAlien is not just a synthesiser, it’s several components, including a MIDI control surface and MIDI read/write module.


From a more technical point of view, how was audio optimisation handled? Did using procedural audio improve CPU/memory usage?

PW: VocAlien is very efficient and on average our CPU usage is low. However due to the nature of the game, where we can’t predict the range of creatures or sound emitting objects on a planet, the voice allocation can jump around substantially. We have to use a lot of voice limiting based on distance to constantly prioritise the sounds closest to the player. 
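In Wwise this kind of prioritisation is typically authored with playback limits and distance-based priority settings; as a generic illustration of the underlying idea only (not the actual NMS code), a voice limiter just needs to keep the closest N emitters audible each update.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Emitter {
    int   id;
    float distanceToListener;
    bool  audible;   // false => virtualised/stopped this frame
};

// Keep only the maxVoices emitters closest to the player audible.
void LimitVoicesByDistance(std::vector<Emitter>& emitters, std::size_t maxVoices)
{
    if (emitters.size() > maxVoices) {
        // Partially sort so the closest maxVoices emitters come first.
        std::nth_element(emitters.begin(), emitters.begin() + maxVoices, emitters.end(),
                         [](const Emitter& a, const Emitter& b) {
                             return a.distanceToListener < b.distanceToListener;
                         });
    }
    for (std::size_t i = 0; i < emitters.size(); ++i)
        emitters[i].audible = (i < maxVoices);
}
```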

What do you think is the best use of procedural audio? Would it be better suited to some types of projects or sounds than others?

PW: Procedural audio, according to my suggested definition above, only makes sense if it solves a problem for you that would otherwise be difficult to resolve using conventional sound design. 

It's still a poor way to create realistic sounds. I'm not generally in favour of using it to create wind or rain effects, for example. As a sound designer I find this a very functional approach to sound, one that ignores the emotive qualities natural sound can have. Wind can be cold, gentle, spooky, reassuring. There are complex qualities in natural sounds that we instantly react to; it's a lot harder to achieve this with synthetic sound.

Finally, NMS's audio is of such a greatly varied nature and represents a massive achievement overall. Do you have a few favourite sounds in the game?

PW: Thank you, I'll very gratefully take any compliments. Although it started off as quite incidental, I like how we've managed to insert so many different flavours of rain into the game. I thanked Sean recently for letting me make SimRain; the game itself is incidental.

What gives me pleasure is knowing the everyday items that make it into the game, such as an electric water pump, vending machine, garage motor. I've included some examples of the raw sounds that were used as source material below.

Big thanks to Paul Weir for this interview and his insights on procedural audio!
Anne-Sophie Mongeau

Sound Designer

Eidos-Montréal

Anne-Sophie is currently working as a Sound Designer at Eidos-Montréal. Previously, she was a Game Audio Engineer at DIGIT Game Studios in Dublin. She has been working in games since 2012, both designing and integrating sound for independent and AAA titles. Her background in music studies, notably at the University of Edinburgh, has led her to become involved in projects of many different natures, in both linear and interactive media such as short films, documentaries, and interactive audiovisual installations. Driven by a passion for knowledge sharing, Anne-Sophie regularly hosts practical workshops and masterclasses for university students on how to use game audio tools.

 @annesoaudio
