Practical Implementation: Leveraging Wwise to Replicate a Real Radio in Saints Row (2022)

Game Audio

When initially designing systems in pre-production for Saints Row (2022), the audio team decided to let the real world dictate how a number of our systems would operate. This was mainly because of the structure of the team at large: we have practical developers applying real-world skills in our proprietary engine. Our physics programmer used to make flight simulators, our vehicle designer was an engineer, and so on, so they would build their systems around what they knew. We decided to leverage this: all those systems already worked in a practical way, so let's just stick with what works.

For example, our vehicle engine is set up just like a real vehicle; we get an actual RPM value, we have gear ratios, throttle, suspension, and even tire slip to consider, not just ignition/shutdown with a velocity RTPC (though we did have those available too). We even have an RTPC for whether a boat’s propeller is submerged or not.
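As a rough illustration of how those simulation values reach Wwise, here is a minimal sketch of a per-frame update pushing them as RTPCs through the SDK; the RTPC names are hypothetical placeholders, not our actual parameter list.

```cpp
// Minimal sketch: pushing simulated engine values to Wwise each frame.
// RTPC names are hypothetical placeholders.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

void UpdateVehicleAudio(AkGameObjectID vehicleObj,
                        float rpm, float throttle, float tireSlip,
                        bool propellerSubmerged)
{
    AK::SoundEngine::SetRTPCValue("Engine_RPM", rpm, vehicleObj);
    AK::SoundEngine::SetRTPCValue("Throttle", throttle, vehicleObj);
    AK::SoundEngine::SetRTPCValue("Tire_Slip", tireSlip, vehicleObj);

    // Boats only: drives a filter/volume curve when the prop leaves the water.
    AK::SoundEngine::SetRTPCValue("Propeller_Submerged",
                                  propellerSubmerged ? 1.0f : 0.0f, vehicleObj);
}
```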

On top of that, vehicle collisions are registered on each vehicle depending on the type of collision. If I’m driving full-on and slam into an NPC vehicle perpendicular to my vehicle, my car plays a head-on collision, and the NPC plays a T-bone sound. With the limited number of assets we used, we were able to get the permutation count for general vehicle collision sounds into the tens of millions before even counting the random vehicle parts that could fall off or spray fluid as a sweetener layer.
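A hedged sketch of how each vehicle might classify the same impact from its own point of view and fire the matching sound; the Switch group, states, Event name, and angle thresholds are all illustrative, not the shipped logic.

```cpp
// Sketch: classifying one collision per vehicle and firing the matching sound.
// Switch group/states and the Event name are illustrative placeholders.
#include <AK/SoundEngine/Common/AkSoundEngine.h>
#include <cmath>

void PlayVehicleCollision(AkGameObjectID myVehicle, AkGameObjectID otherVehicle,
                          float impactAngleDeg, float impactSpeed)
{
    // Each vehicle evaluates the hit from its own frame of reference.
    const char* mySwitch    = (impactAngleDeg < 30.0f) ? "Head_On" : "Glancing";
    const char* otherSwitch = (std::fabs(impactAngleDeg - 90.0f) < 30.0f) ? "T_Bone" : "Glancing";

    AK::SoundEngine::SetSwitch("Collision_Type", mySwitch, myVehicle);
    AK::SoundEngine::SetSwitch("Collision_Type", otherSwitch, otherVehicle);

    AK::SoundEngine::SetRTPCValue("Impact_Speed", impactSpeed, myVehicle);
    AK::SoundEngine::SetRTPCValue("Impact_Speed", impactSpeed, otherVehicle);

    AK::SoundEngine::PostEvent("Play_Vehicle_Collision", myVehicle);
    AK::SoundEngine::PostEvent("Play_Vehicle_Collision", otherVehicle);
}
```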

We applied the same theory to prop impacts; if a large hollow wooden barrel hits a metal pole, it sounds exactly as described. In addition, one of my favorite features (which I’m sure nobody consciously noticed) is that bullet impacts and explosions travel at the speed of sound using the Initial Delay feature.

img1
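For the speed-of-sound behavior, one way to sketch the wiring is to bind the sound's Initial Delay property to an RTPC in the authoring tool and feed it the travel time computed from the distance to the listener. The RTPC and Event names below are assumptions for illustration.

```cpp
// Sketch: delaying distant bullet impacts/explosions by the speed of sound.
// Assumes the Sound SFX's Initial Delay property is bound to a hypothetical
// "Sound_Travel_Delay" RTPC (in seconds) in the authoring tool.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

void PlayDistantImpact(AkGameObjectID impactObj, float distanceToListenerMeters)
{
    const float kSpeedOfSound = 343.0f; // m/s at roughly 20 degrees C
    const float delaySeconds  = distanceToListenerMeters / kSpeedOfSound;

    AK::SoundEngine::SetRTPCValue("Sound_Travel_Delay", delaySeconds, impactObj);
    AK::SoundEngine::PostEvent("Play_Bullet_Impact", impactObj);
}
```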

This theory was held across the entire project, with every audio system considering any real-life practical implementation that we could think of before attempting to get fancy. That’s when things got spicy—it was time to design the radio system.

Saints Row Radio—A Brief History

Designing the radio system was a perfect storm of experience and timing. Before joining Volition, I cut my teeth in the cutthroat rat race of medium-market radio, starting in promotions, then producing commercials and doing some engineering before joining the airstaff. I knew how radio worked from top to bottom, and designing the Saints Row radio was the first project for which I had full ownership of audio systems from the ground up.

Previous SR titles had a complex network of background clocks, timers, and seek tables, with hundreds of table files all talking to each other every frame. Under the hood, it was a spiderweb that would fall apart if the spider sneezed. On the player’s side, there was some cross-pollination between individual car radios playing the same station, some syncing bugs, etc., but it was radio enough to be called radio.

I thought we could do better, considered our real-world-first approach, cracked my knuckles, and called our audio programmer, Mike Naumov, into my office.

The idea was simple: Radio is radio. There are transmitters in the world and receivers roaming around. All we had to do was connect the two and set up playlist logic that the radio elements could follow.

The Transmitter

Before we could get anyone tuned in, we had to get something playing. We placed an object under the origin of the game world and played a Random Container of old-school country APM tracks just to get started. This would eventually evolve into the Tumbleweed station. 

img2
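In SDK terms, a transmitter is little more than a registered game object with a fixed position and a looping playlist Event posted on it. A minimal sketch, with placeholder IDs and Event names:

```cpp
// Sketch: a "transmitter" game object parked under the world origin that
// loops a station's playlist Event. IDs and names are placeholders.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

static const AkGameObjectID kTumbleweedTransmitter = 1001; // arbitrary ID

void StartTumbleweedStation()
{
    AK::SoundEngine::RegisterGameObj(kTumbleweedTransmitter, "Radio_Tumbleweed_TX");

    AkSoundPosition pos;
    pos.SetPosition(0.0f, -1000.0f, 0.0f);   // tucked under the origin
    pos.SetOrientation(0.0f, 0.0f, 1.0f,     // front vector
                       0.0f, 1.0f, 0.0f);    // top vector
    AK::SoundEngine::SetPosition(kTumbleweedTransmitter, pos);

    // The Event plays a Random Container of tracks; virtual voice settings
    // ("play from elapsed time") keep the station ticking even when
    // nobody is listening.
    AK::SoundEngine::PostEvent("Play_Radio_Station_Tumbleweed", kTumbleweedTransmitter);
}
```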

Making sure we would stay in sync using virtual voice settings was my primary task. In the meantime, Mike got to work on the radio dial functionality, and we added another object with another container so we could start testing switching between stations.

The Receiver

Now, we just had to figure out how to actually hear the radio when you turn it on. Initially, we tried simply setting an RTPC to turn a station 2D when selected while keeping all the others 3D using the Speaker Panning/3D Spatialization Mix feature. But we were concerned NPC cars would also tune into the same station and double up the music. The solution ended up being as simple as assigning a PC/NPC RTPC to the player-owned receiver to do basically the same thing.
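A minimal sketch of that receiver flag, assuming a hypothetical "Is_Player_Radio" RTPC mapped to the Speaker Panning/3D Spatialization Mix (100 = fully 2D, 0 = fully 3D) in the authoring tool:

```cpp
// Sketch: a receiver component flags itself as player-owned so only the
// player's tuned station collapses to 2D. RTPC name is a placeholder.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

void SetReceiverOwnership(AkGameObjectID receiverObj, bool isPlayerOwned)
{
    // 100 = fully 2D (speaker panning), 0 = fully 3D (spatialized emitter).
    AK::SoundEngine::SetRTPCValue("Is_Player_Radio",
                                  isPlayerOwned ? 100.0f : 0.0f,
                                  receiverObj);
}
```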

We now had a functional radio component that could be placed on all vehicles, distinguish the player character’s radio from NPC radios, and functionally change between stations. 

Through a proprietary multi-position emitter tool (similar to AK’s Set Multiple Positions node in Unreal), we were able to dynamically attach emitting positions to each radio component without creating new game objects, allowing multiple vehicles to be tuned into the same station while moving, without messing with our voice or object count.

img3

img4
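In plain Wwise SDK terms (setting aside our proprietary tool), the same idea can be sketched with SetMultiplePositions, feeding one station object the positions of every vehicle currently tuned to it:

```cpp
// Sketch: one station game object emitting from every tuned vehicle via
// SetMultiplePositions, instead of spawning per-car emitters.
#include <AK/SoundEngine/Common/AkSoundEngine.h>
#include <vector>

void UpdateStationEmitters(AkGameObjectID stationObj,
                           const std::vector<AkSoundPosition>& tunedVehiclePositions)
{
    if (tunedVehiclePositions.empty())
        return;

    // MultiDirections treats the positions as one sound heard from several
    // directions, so overlapping emitters don't stack volume.
    AK::SoundEngine::SetMultiplePositions(
        stationObj,
        tunedVehiclePositions.data(),
        static_cast<AkUInt16>(tunedVehiclePositions.size()),
        AK::SoundEngine::MultiPositionType_MultiDirections);
}
```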

The Stations

Now that we had a way to transmit and a way to listen, it was time to start building the stations. From my experience using Scott Studios software back in the day to set up and run real-world radio playlists, I figured we’d just recreate that functionality in the guts of our own radio system.

Here’s a Googled screenshot of Scott Studios SS32 radio automation software interface, circa 2006. 

img5v3

(Source: https://www.devabroadcast.com/dld.php?r=759)

This effectively lets the station programmer predefine a sequence, including commercial breaks, news breaks, DJ talkover, weather, and station IDs, all broken down into individual elements, letting the airstaff run the automated playlist or hijack it for timing or requests.

To achieve this, we created a modular system built from the same kind of “elements”, which could be slotted into a predefined order by the designer so we wouldn’t get too many commercials in a row, a song with an outro followed immediately by an intro to the next song, a station ID in between two commercials, and so on. This modularity also opened the door for a custom player-created playlist as a bonus.
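To make the ordering rules concrete, here is an illustrative sketch of the kind of adjacency checks such an element system might enforce; the element types and rules are examples, not our actual table format.

```cpp
// Illustrative sketch of element sequencing rules; not the shipped tables.
enum class ElementType { Song, Commercial, StationID, DJTalk, Weather, Newscast };

struct RadioElement
{
    ElementType type;
    bool hasOutro = false;   // song ends with room for DJ talkover
    bool hasIntro = false;   // song opens with a ramp the DJ talks over
};

// Decide whether `next` may directly follow `prev` in the playlist.
bool CanFollow(const RadioElement& prev, const RadioElement& next)
{
    // Avoid runs of back-to-back commercials.
    if (prev.type == ElementType::Commercial && next.type == ElementType::Commercial)
        return false;

    // A song with an outro shouldn't slam into another song's intro.
    if (prev.type == ElementType::Song && prev.hasOutro &&
        next.type == ElementType::Song && next.hasIntro)
        return false;

    return true;
}
```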

As you can see in this screenshot from our table editor, newscasts are sprinkled in between the non-verbal elements. The system plays a newscast in the next available slot if and only if one was unlocked by completing a piece of gameplay, since the newscast will discuss the player’s actions. It helped tremendously that Mike’s time was shared with the progression team, making that functionality a breeze to set up.

img6

The Songs

We had planned the entire system around several Interactive Music features, and this is where we really started to leverage Wwise. Since each element was modular, all we needed was a play Event for each one. These are all paired off in a separate table class, so the code made all the selections; each song is simply a Switch Container with each of the flavors nested underneath as a sequence:

img7

img8

When a flavor is selected, a Switch is set for that station’s object and the play Event is posted for that song’s Switch Container. We could then set the Exit cues to time out the transitions, allowing each DJ to hit the post perfectly (i.e., stop talking on a specific beat, a primary goal/flex for all on-air personalities):

img9

That Exit cue would line up to fire off the song so the DJ will always hit the post, indicated by the playhead below:

img10
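Pulling the flavor selection together, a minimal code-side sketch might look like this; the Switch group, Switch states, and Event names are placeholders.

```cpp
// Sketch of the flavor selection described above; names are placeholders.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

void PlaySongFlavor(AkGameObjectID stationObj, const char* songEvent, const char* flavor)
{
    // e.g., flavor = "Intro_Outro", "Clean", "Outro_Only"...
    AK::SoundEngine::SetSwitch("Song_Flavor", flavor, stationObj);
    AK::SoundEngine::PostEvent(songEvent, stationObj);
}
```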

Similar attention was paid to the outro; when the music starts to wind down, we fire off an Exit cue and let the DJ start to talk. We also used sidechain compression driven by the DJ voice bus to let the DJ punch through the music if the track was a little fuller than the jockey.

img11

In addition to the Exit cue, we had to be able to tell the radio system the element was finished, not just the individual track. To do this, we also placed a custom cue with a specific label so that the code could listen for an AK callback; when triggered, it tells the system to select and play the next element. This significantly streamlined the engine-side processing and allowed a very natural crossfade between elements, just as a real radio DJ could manually force the next element to play or predetermine a crossfade duration. All elements also had the same custom cue to exit, so the code only had to listen for a single callback each time it cycled.
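A hedged sketch of how that callback could be wired with the Wwise SDK: post the element's Event with the music user cue callback flag, then advance the playlist when the labeled cue fires. The cue label, Event name, and RadioStation type are assumptions for illustration.

```cpp
// Sketch: posting an element's play Event with a music callback so the radio
// system hears the "element finished" custom cue. Names are placeholders.
#include <AK/SoundEngine/Common/AkSoundEngine.h>
#include <AK/SoundEngine/Common/AkCallback.h>
#include <cstring>

static void RadioMusicCallback(AkCallbackType in_eType, AkCallbackInfo* in_pInfo)
{
    if (in_eType == AK_MusicSyncUserCue)
    {
        AkMusicSyncCallbackInfo* pSync = static_cast<AkMusicSyncCallbackInfo*>(in_pInfo);
        if (pSync->pszUserCueName && std::strcmp(pSync->pszUserCueName, "Element_Exit") == 0)
        {
            // Tell the radio system (passed in via the cookie) to queue up and
            // crossfade into the next element, e.g.:
            // static_cast<RadioStation*>(in_pInfo->pCookie)->PlayNextElement();
        }
    }
}

void PlayRadioElement(AkGameObjectID stationObj, const char* elementEvent, void* radioStation)
{
    AK::SoundEngine::PostEvent(elementEvent, stationObj,
                               AK_MusicSyncUserCue,   // only listen for user cues
                               &RadioMusicCallback,
                               radioStation);
}
```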

In Conclusion…

When all this comes together, it feels like the most realistic radio experience I’ve had to date. Two cars passing while listening to the same station are completely in sync, you can hijack a car playing a song you like, and it immediately strips LPF and volume offsets at the exact same place in the song. You can toggle through all the stations and land back at the initial song as if it kept broadcasting over the airwaves without considering your individual actions…because it actually did.

Without the perfect collision of Wwise’s capabilities, my exact position on the project at the time, my experience working in this exact field in the real world, and especially Mike’s incredible work and willingness to rewrite a paradigm, we’d probably still be running clocks and seek-playing songs at incredible cost to our CPU budget. Instead, we squashed all the bugs from the old system and made radio as modular as possible for potential fan modding in the future. It was overall an extremely pleasant and satisfying experience that we’re very proud of.

Brendon Ellis

Senior Technical Audio Designer

Volition


Brendon Ellis started testing games in 2007, seeking shelter from the cutthroat rat race of the medium-market radio industry. He began as a line tester, moved into audio testing, then bug fixing, then sound design before eventually becoming Volition’s Senior Technical Audio Designer.

Discord: poor-old-goat#8203

MobyGames

@The_Pie_Maker
