After setting the game world abuzz at GDC and winning multiple awards at the Independent Games Festival, Denmark-based independent developer Playdead released their highly anticipated game LIMBO on XBLA in the summer of 2010.
With a small team of 14 (at peak) and three years of development, Playdead were able to create a one-of-a-kind puzzle game about a nameless boy searching for his lost sister in LIMBO, a highly artistic, yet strikingly eerie black and white world. We sat down with Martin Stig Andersen, the composer and sound designer on LIMBO, to find out more about this unique project, how the audio came to life, and what it was like using Audiokinetic’s Wwise.
Background
What is your work background?
My background is actually quite diverse. I started my professional career a decade ago doing contemporary instrumental composition. I then moved on to electroacoustic composition working within various disciplines including acousmatic music, interactive music, sound installation, and electroacoustic performance. More recently, I’ve been exploring how electroacoustic music can contribute to audiovisual work, playing the role of sound design and music at the same time.
How did you get started in the game industry?
It was actually LIMBO that made me venture into game audio. When I first came across the original concept trailer, I immediately spotted several qualities in the visuals that I could relate to my own compositional work. There was so much ambiguity in the blurred, black & white artistic style that I began to see the visuals as a canvas for each player's own associations and interpretations. Fortunately, the game director spotted corresponding qualities in my musical work, and I eventually got the chance to pitch my ideas for the project.
What are your primary responsibilities?
On LIMBO, my job was to articulate the overall sound world through sound recording, processing, and implementation. By dividing up the tasks, as recommended in the Wwise SDK, we were able to establish a workflow that enabled us to get the most out of Wwise. I worked closely with LIMBO's game director, Arnt Jensen, to define the particular needs for the audio, and thanks to the willingness and support of the development team, the final result ended up being equally complex and compelling.
What does sophisticated audio bring to game production?
By making game audio sophisticated, we've promoted sound from being a mere mechanical, passive passenger to an active contributor to the overall game experience. While many game developers remain focused on enhancing realism in games, I believe the future of game audio lies in the articulation and support of other, more abstract aspects of video games, such as drama, narrative, and subjective points of view.
Rather than being passively hooked up to game objects in the game world, audio needs to be dynamic. This means that sounds associated with different objects are balanced not only based on the positioning of the objects within the game geometry, but also on more abstract, dramaturgical information derived from the game.
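To make that idea concrete, here is a minimal game-side sketch using the Wwise SDK, under the assumption of a hypothetical RTPC named "Dramatic_Intensity" (not from the actual LIMBO project): the game's dramaturgical logic feeds a single parameter that the Wwise project can map to bus volumes, filters, or send levels, so the mix reacts to the drama rather than only to object positions.

```cpp
// Sketch: driving the mix from dramaturgical game state rather than geometry alone.
// The RTPC name below is a hypothetical example, not LIMBO's actual project data.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

// Called from the game's update loop whenever the dramatic tension of the
// current scene changes (e.g. a chase begins, or the boy reaches a safe area).
void UpdateDramaticMix(float tension01)
{
    // A global RTPC (no game object passed, so it applies game-wide); in the
    // authoring tool this parameter could attenuate or filter whole categories
    // of sounds, independently of their 3D positioning.
    AK::SoundEngine::SetRTPCValue("Dramatic_Intensity", tension01 * 100.0f);
}
```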
LIMBO
Can you describe LIMBO to us?
LIMBO is an artistic 2D black & white platform puzzle game about a nameless boy searching for his lost sister in LIMBO. LIMBO is the result of a unique approach to game development in which creative decisions are made on the basis of experimentation and perceptual experience with the game environment rather than abstractly through script writing. The overall atmosphere is dark, evocative and sinister, with an unsettling uncertainty that comes directly from the ambiguous audiovisuals and the lack of an explicit story line.
What were your ideas for the audio on LIMBO?
Since LIMBO doesn't look like a regular video game, we didn't want the audio to sound game-like either. The visuals appear very retro, not in a video game sense, but more in terms of film. So when it came time to start working on the audio, it became obvious to us to look for inspiration in the soundtracks of old films. We adopted several qualities, including the frequent use of near-silence and the lack of audiovisual correspondence, all bound up within a lively, homogeneous, yet immensely dull and muffled sound texture. As with LIMBO's visuals, the lack of sound clarity and the ambiguity in the relationship between the audio and visual elements leave the door completely open for the player's own associations and interpretations.
To give the audio a more contemporary and spacious feel (after all, we're not living in the '30s), the dull, monophonic sounds were juxtaposed with temporally smeared, stereophonic versions of the very same sounds, giving way to a rather delusive sound space. We really wanted to degrade the sounds so that they matched the reductive quality of the silhouette world. This was achieved by running the sounds through antique wire recorders and other obsolete equipment. The spacious shadowing, on the other hand, was created by applying fast Fourier transform and convolution processing to the original distorted sounds. This created a completely new range of smeared, stretched, and swelled sounds that organically weave throughout the soundscape. On a larger scale, we wanted the sound to help focus the attention of the player by letting only selected objects make sound at any given instant. This technique is somewhat representative of how old films were mixed. Back in the day, only a limited number of audio tracks were available to the sound designer, so they were forced to decide at every instant which objects would trigger a sound.
How did using Wwise help you to achieve your creative vision?
The versatility of Wwise allowed me to take a rather generative approach to sound structuring. For example, rather than simply looping background noise and other recorded ambiances, I was able to decompose each into small fragments of various kinds, and then regenerate them in Wwise. In other words, by creating a structure of blend, random, and sequence containers in which trigger-rates, volume, filters, and pitch were randomized, the original ambience sound re-emerged in an ever-varying, adaptive guise. Because our game was so unique, I didn’t want to just compose linear music in a sequencer and then attempt to squeeze it into the otherwise non-linear environment of the game. With Wwise, I was able to create complex structures of event actions, switches, and RTPC assignments that dynamically weaved together musical fragments based on incoming events from the game. The ability to apply states to various properties in Wwise made it possible to create the dynamic mix that reacted to all kinds of information from the game.
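As an illustration of the game-side half of such a structure, the sketch below posts events and sets a switch so that the container hierarchy built in the authoring tool picks and weaves the fragments. The event, switch group, and game object names are hypothetical stand-ins, not taken from the LIMBO project.

```cpp
// Sketch: the game only reports context and moments; which fragment plays,
// and with what randomized volume, pitch, and filtering, is decided by the
// blend/random/sequence containers authored in Wwise.
// All names are illustrative assumptions.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

static const AkGameObjectID MUSIC_OBJECT = 100; // registered once at startup

void InitMusicObject()
{
    AK::SoundEngine::RegisterGameObj(MUSIC_OBJECT);
}

// Called when the player crosses into a new area of the level.
void OnAreaEntered(const char* areaSwitch)
{
    // The switch selects which pool of fragments the containers draw from.
    AK::SoundEngine::SetSwitch("Area", areaSwitch, MUSIC_OBJECT);

    // The event itself only says "something happened here"; the Wwise-side
    // structure turns it into an ever-varying, adaptive piece of the soundscape.
    AK::SoundEngine::PostEvent("Play_Ambience_Fragment", MUSIC_OBJECT);
}
```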
What were the main challenges on this title and how did using Wwise help you to overcome these?
In order to establish a rather silent environment, we decided that the sound of the protagonist would be relatively loud. As a result of this decision, it became crucial that we have a wide variety of footstep sounds, as the player would hear them throughout the game. Besides creating sound bundles for more than a dozen different ground materials, we separated each footstep sample into a "heel" and a "toes" component. This allowed us to create more variation, as instances from each respective category could be combined randomly. In Wwise, we were able to easily manage the gradual change from walk to run using a trigger rate transition between "heel" and "toes" that decreases as the velocity of the protagonist increases. RTPCs came in very handy when dealing with the change in intensity of the protagonist's footsteps. The audio programmer on the team provided me with several RTPCs related directly to the protagonist, and I simply had to map them to the properties of the footstep sounds. For example, there's an RTPC representing the "exhaust level" of the protagonist that allows us to gradually attenuate the footstep sounds after the boy has been running for a specific amount of time. When the boy's "exhaust level" returns to normal, the footstep sounds automatically return to their original settings.
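A rough sketch of the game-side feed for such a footstep system follows; the switch group, RTPC names, and scaling are assumptions for illustration (the interview only confirms that per-material variations and protagonist RTPCs such as the "exhaust level" exist on the Wwise side).

```cpp
// Sketch: ground material is a switch on the player's game object; velocity
// and exhaustion are RTPCs that the Wwise project maps onto trigger rate,
// volume, and filtering of the footstep sounds.
// Names and parameter ranges are illustrative assumptions.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

static const AkGameObjectID PLAYER_OBJECT = 1;

void OnGroundMaterialChanged(const char* material) // e.g. "Grass", "Wood", "Metal"
{
    AK::SoundEngine::SetSwitch("Ground_Material", material, PLAYER_OBJECT);
}

void UpdateFootstepParameters(float velocity, float exhaustLevel)
{
    // Walk-to-run blend: mapped in the authoring tool to the heel/toes
    // trigger-rate transition described above.
    AK::SoundEngine::SetRTPCValue("Player_Velocity", velocity, PLAYER_OBJECT);

    // Gradually attenuates footsteps after prolonged running, and lets them
    // return to their original settings once the value normalizes.
    AK::SoundEngine::SetRTPCValue("Exhaust_Level", exhaustLevel, PLAYER_OBJECT);
}

// The footstep events themselves would typically be posted from animation
// callbacks, e.g.:
// AK::SoundEngine::PostEvent("Play_Footstep", PLAYER_OBJECT);
```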
Besides these global assignments, I also took full advantage of Wwise's "State" functionality to change the volume of the footsteps in accordance with specific environments or even psychological states. Finally, I used the attenuation editor to change the volume, reverb, and LPF dynamically based on the boy's distance from the camera. Considering that the footstep sound is affected by more than 10 properties, each of which can attenuate it by 1.5 dB or more, you end up with a possible variation of more than 15 dB. The result is a continuous variation of the footstep sounds, generally providing a relatively loud point of reference with regard to the overall level balancing, yet without becoming annoying to the player.
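For the state-driven part, the game side is very thin: it only reports which environment or psychological state is active, while the volume offsets themselves are authored in Wwise. A minimal sketch, with hypothetical state group and state names:

```cpp
// Sketch: states are global, so no game object is needed. In the authoring
// tool, each state can offset the footsteps' volume among other properties.
// State group and state names are illustrative assumptions.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

void OnEnterCave()   { AK::SoundEngine::SetState("Environment", "Cave"); }
void OnEnterForest() { AK::SoundEngine::SetState("Environment", "Forest"); }

// A separate state group for the boy's psychological condition.
void OnPsychologicalStateChanged(bool distressed)
{
    AK::SoundEngine::SetState("Player_Mind", distressed ? "Distressed" : "Normal");
}
```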
Which audio aspect are you the most proud of?
I am really proud of how we were able to create a completely unified sound environment, where elements take on the role of both music and sound design at the same time. There is no separate music track per se in LIMBO, which means players won’t be able to alter the mix between “sound effects” and “background music”. It will be more like a movie experience, where you aren’t able to alter the mix of a film. Game players may not be used to this approach, but it works extremely well in LIMBO and is exactly what we were aiming for.
Using Wwise
How did you find working with Wwise?
Aside from the impressive number of features, Wwise is incredibly stable. The software is very responsive, and I experienced remarkably few anomalies over the course of the entire development period. In game development, issues can arise from any number of sources and it is easy to get overwhelmed. It is essential that the software you are using be stable, and Wwise certainly delivers on that front.
Can you describe your learning process for Wwise?
I already had experience with other non-linear environments such as Max/MSP, so it was relatively easy to get up and running with Wwise. In order to get an overall picture of the possibilities, I started off by reading through the manual, watching the tutorials, and studying the Wwise sample project. This upfront investment really helped me avoid unnecessary workarounds caused by a lack of knowledge of the features available. While it might seem daunting to get started with Wwise, it is actually very straightforward, and before you know it, you are building very sophisticated projects.
What are your favorite features?
I have a few, but Wwise's ability to connect to the game, allowing me to tweak the audio while the game is being played, is the one that stands out as the most powerful. I also really like being able to apply states to a variety of properties.
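For reference, the live connection he describes requires the game build to initialize Wwise's communication module, which is typically compiled out of shipping builds. A minimal sketch of that step, assuming the rest of the sound engine has already been initialized:

```cpp
// Sketch: enabling the Wwise authoring tool to connect to a running game
// for live mixing and profiling (non-optimized builds only).
#include <AK/Comm/AkCommunication.h>

bool InitWwiseCommunication()
{
#ifndef AK_OPTIMIZED
    AkCommSettings commSettings;
    AK::Comm::GetDefaultInitSettings(commSettings);
    return AK::Comm::Init(commSettings) == AK_SUCCESS;
#else
    return true; // communication is stripped from optimized/shipping builds
#endif
}
```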
What are the main benefits to using Wwise?
Wwise provides a bridge between the coding and sound design worlds, establishing a common ground for those working within their respective fields. This common ground ultimately enhances the outcome of their individual efforts. Many of the concepts in Wwise should already be familiar to most sound designers. Wwise does introduce a few new concepts, but they only help the sound designer to better understand the nature of games and get the most out of their game audio design. The abundance of features in Wwise makes for a complete and versatile game audio solution that allows us to realize our audio visions.
Can you talk about your experience with the Support from Audiokinetic?
Audiokinetic’s support team is very responsive. In game development, you need a quick turnaround on issues in order to keep moving forward with your work. Audiokinetic’s support did not disappoint me, as they would regularly respond to my issues within the same day.