Practical Implementation: Leveraging Wwise to Replicate a Real Radio in Saints Row (2022)

Game Audio

When initially designing systems in pre-production for Saints Row (2022), the audio team decided to let the real world dictate how a number of our systems would operate. This decision stemmed mainly from the structure of the team at large—we have practical developers applying real-world skills in our proprietary engine. Our physics programmer used to make flight simulators, our vehicle designer was an engineer, and so on, so they would build their systems around what they knew. We decided to leverage this; all those systems already worked in a practical way, so let's just stick with what works.

For example, our vehicle engine is set up just like a real vehicle; we get an actual RPM value, we have gear ratios, throttle, suspension, and even tire slip to consider, not just ignition/shutdown with a velocity RTPC (though we did have those available too). We even have an RTPC for whether a boat’s propeller is submerged or not.

On top of that, vehicle collisions are registered on each vehicle depending on the type of collision. If I'm driving flat out and slam into an NPC vehicle perpendicular to mine, my car plays a head-on collision and the NPC plays a T-bone sound. Even with the limited number of assets we used, we got the number of permutations of general vehicle collision sounds into the tens of millions before even counting the random vehicle parts that could fall off or spray fluid as a sweetener layer.
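To get a feel for how quickly those numbers explode, here's a back-of-the-envelope sketch. The asset counts below are purely hypothetical (the article doesn't give Volition's actual numbers); the point is that independent layers multiply:

```python
# Hypothetical asset counts, for illustration only -- not Volition's real data.
collision_types = 4       # e.g., head-on, T-bone, rear-end, sideswipe
impact_variations = 16    # random variations per collision type
debris_sweeteners = 10    # optional debris/fluid sweetener layers
material_pairs = 30       # surface/material combinations

# Each of the two vehicles involved rolls its own type/variation/sweetener,
# so the per-vehicle counts multiply, then square across the colliding pair.
per_vehicle = collision_types * impact_variations * debris_sweeteners
total = per_vehicle ** 2 * material_pairs
print(f"{total:,}")  # → 12,288,000
```

Even modest per-layer counts land in eight figures, which is how a small asset set can reach tens of millions of audible permutations.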

We applied the same theory to prop impacts; if a large hollow wooden barrel hits a metal pole, it sounds exactly as described. In addition, one of my favorite features (which I’m sure nobody consciously noticed) is that bullet impacts and explosions travel at the speed of sound using the Initial Delay feature.

img1
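The math behind that delay is just distance over the speed of sound. A minimal sketch (the function name is mine, not the game's; in Wwise, the result would drive the Initial Delay property, e.g., via an RTPC):

```python
SPEED_OF_SOUND_M_S = 343.0  # dry air at roughly 20 °C

def initial_delay_seconds(distance_m: float) -> float:
    """Seconds to delay a bullet impact or explosion so the sound
    arrives after the visual, just like in real life."""
    return distance_m / SPEED_OF_SOUND_M_S

# An explosion 686 m away is heard 2 seconds after the flash.
print(initial_delay_seconds(686.0))  # → 2.0
```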

This philosophy held across the entire project, with every audio system considering any real-life, practical implementation we could think of before attempting to get fancy. That's when things got spicy—it was time to design the radio system.

Saints Row Radio—A Brief History

Designing the radio system was a perfect storm of experience and timing. Before joining Volition, I cut my teeth in the cutthroat rat race of medium-market radio, starting in promotions, then producing commercials and doing some engineering work before joining the airstaff. I knew how radio worked from tip to toe, and the Saints Row radio was the first project where I had full ownership of the audio systems from the ground up.

Previous SR titles had a complex network of background clocks, timers, and seek tables, with hundreds of table files all talking to each other every frame. Under the hood, it was a spiderweb that would fall apart if the spider sneezed. On the player’s side, there was some cross-pollination between individual car radios playing the same station, some syncing bugs, etc., but it was radio enough to be called radio.

I thought we could do better, thought back to our real-world-first approach, cracked my knuckles, and called our audio programmer, Mike Naumov, into my office.

The idea was simple: Radio is radio. There are transmitters in the world and receivers roaming around. All we had to do was connect the two and set up playlist logic that the radio elements could follow.

The Transmitter

Before we could get anyone tuned in, we had to get something playing. We placed an object under the origin of the game world and played a Random Container of old-school country APM tracks just to get started. This would eventually evolve into the Tumbleweed station. 

img2

Making sure we would stay in sync using virtual voice settings was my primary task. In the meantime, Mike got to work on the radio dial functionality, so we added another object with another container and started testing switching between stations.

The Receiver

Now, we just had to figure out how to actually hear the radio when you turned it on. Initially, we tried simply setting an RTPC to turn a station 2D when selected while keeping all the others 3D, using the Speaker Panning/3D Spatialization Mix feature. But we were concerned that NPC cars tuned to the same station would double up the music. The solution ended up being as simple as assigning a PC/NPC RTPC to the player-owned receiver to do basically the same thing.
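Here's a minimal model of that receiver logic. The names and structure are hypothetical, but the idea matches the Speaker Panning/3D Spatialization Mix slider, where 0% is pure speaker panning (2D) and 100% is pure 3D spatialization:

```python
def spatialization_mix_percent(is_player_vehicle: bool) -> int:
    """0 = pure speaker panning (2D), 100 = pure 3D spatialization."""
    return 0 if is_player_vehicle else 100

def tune(receivers, station):
    """All receivers share one transmitter per station, so two cars on the
    same station never double the music: only the player's receiver renders
    it 2D, while NPC radios stay positional in the world."""
    return {r["id"]: spatialization_mix_percent(r["is_player"])
            for r in receivers if r["station"] == station}

receivers = [
    {"id": "player_car", "is_player": True,  "station": "Tumbleweed"},
    {"id": "npc_car_7",  "is_player": False, "station": "Tumbleweed"},
]
print(tune(receivers, "Tumbleweed"))  # → {'player_car': 0, 'npc_car_7': 100}
```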

We now had a functional radio component that could be placed on all vehicles, distinguish the player character’s radio from NPC radios, and functionally change between stations. 

Through a proprietary multi-position emitter tool (similar to AK’s Set Multiple Positions node in Unreal), we were able to dynamically attach emitting positions to each radio component without creating new game objects, allowing multiple vehicles to be tuned into the same station while moving, without messing with our voice or object count.

img3

img4
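Conceptually, the tool boils down to one playing voice auditioned from many positions, similar to what `AK::SoundEngine::SetMultiplePositions` exposes in the SDK. A simplified Python model, with attenuation taken from the nearest contributing position (the real behavior depends on the multi-position mode chosen):

```python
import math

def nearest_distance(emitter_positions, listener):
    """One voice plays per station, but every tuned vehicle contributes an
    emitting position; the effective attenuation here is simply taken from
    the closest one (a simplification of Wwise's multi-position behavior)."""
    return min(math.dist(p, listener) for p in emitter_positions)

# Two NPC cars tuned to the same station, listener on foot between them:
positions = [(0.0, 0.0), (50.0, 0.0)]
print(nearest_distance(positions, (10.0, 0.0)))  # → 10.0
```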

The Stations

Now that we had a way to transmit and a way to listen, it was time to start building the stations. From my experience using Scott Studio back in the day to set up and run real-world radio playlists, I figured we’d just recreate that functionality in the guts of our own radio system.

Here’s a Googled screenshot of Scott Studios SS32 radio automation software interface, circa 2006. 

img5v3

(Source: https://www.devabroadcast.com/dld.php?r=759)

This effectively lets the station programmer predefine a sequence, including commercial breaks, news breaks, DJ talkover, weather, and station IDs, all broken down into individual elements, and lets the airstaff run the automated playlist or hijack it for timing or requests.

To achieve this, we created a modular system composed of the same kinds of “elements,” which the designer could slot in a predefined order so we wouldn’t get too many commercials in a row, a song with an outro followed immediately by an intro to the next song, a station ID between two commercials, and so on. This modularity also opened the door for a custom player-created playlist as a bonus.
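Those ordering rules can be expressed as simple adjacency constraints. This sketch is hypothetical (the element names and rule set are mine), but it captures the kind of checks the designer-facing tables encode:

```python
# Pairs of elements that must never play back-to-back (illustrative rules).
FORBIDDEN_PAIRS = {
    ("commercial", "commercial"),   # never two spots in a row
    ("song_outro", "song_intro"),   # let a song breathe before the next intro
}

def valid_sequence(elements):
    """Check a planned run of elements against the adjacency rules."""
    for prev, cur in zip(elements, elements[1:]):
        if (prev, cur) in FORBIDDEN_PAIRS:
            return False
    # No station ID sandwiched between two commercials.
    for a, b, c in zip(elements, elements[1:], elements[2:]):
        if (a, c) == ("commercial", "commercial") and b == "station_id":
            return False
    return True

print(valid_sequence(["song", "dj_talk", "commercial", "station_id", "song"]))  # → True
print(valid_sequence(["commercial", "station_id", "commercial"]))               # → False
```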

As you can see in this screenshot from our table editor, newscasts are sprinkled in between the non-verbal elements. The system plays a newscast in the next available slot if and only if one was unlocked by completing a piece of gameplay, because the newscast discusses the player’s actions. It helped tremendously that Mike’s time was shared with the progression team, which made that functionality a breeze to set up.

img6
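That newscast behavior reads naturally as a queue-injection rule. A hypothetical sketch (element names are mine, and the “available slot” test is simplified to “any non-verbal element”):

```python
def inject_newscast(upcoming, unlocked):
    """Insert a newscast into the next available (non-verbal) slot,
    if and only if gameplay has unlocked one."""
    if not unlocked:
        return upcoming
    NON_VERBAL = {"song", "commercial", "station_id"}
    for i, element in enumerate(upcoming):
        if element in NON_VERBAL:
            return upcoming[:i] + ["newscast"] + upcoming[i:]
    return upcoming + ["newscast"]

print(inject_newscast(["dj_talk", "song", "commercial"], unlocked=True))
# → ['dj_talk', 'newscast', 'song', 'commercial']
```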

The Songs

We had planned the entire system around several Interactive Music features, and this is where we really started to leverage Wwise. Since each element was modular, all we needed was a play Event for each one. These are all paired off in a separate table class so the code makes all the selections—each song is simply a Switch Container with each of the flavors nested underneath as a sequence:

img7

img8

When a flavor is selected, a Switch is set on that station’s object and the play Event is posted for that song’s Switch Container. We could then set the Exit cues to time the transitions, allowing each DJ to hit the post perfectly (i.e., stop talking on a specific beat, a primary goal/flex for all on-air personalities):

img9

That Exit cue would line up to fire off the song so the DJ will always hit the post, indicated by the playhead below:

img10
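The timing itself is one subtraction. A sketch of the arithmetic (my framing, for illustration; in the project it's the Exit cue that fires the song at the right moment rather than code doing this math):

```python
def dj_start_offset(post_time_s: float, intro_duration_s: float) -> float:
    """How far into the song the DJ intro must start so the talkover
    ends exactly on the post (the beat where the vocal comes in).
    Negative means the DJ begins before the song does."""
    return post_time_s - intro_duration_s

# The song's post hits at 12.5 s and the DJ read runs 9.0 s,
# so the intro starts 3.5 s into the song.
print(dj_start_offset(12.5, 9.0))  # → 3.5
```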

Similar attention was paid to the outro; when the music starts to wind down, we fire off an Exit cue and let the DJ start talking. We also used sidechain compression on the DJ voice bus to let them punch through the music if the mix was a little fuller than the jockey’s voice.

img11

In addition to the Exit cue, we had to be able to tell the radio system when the whole element was finished, not just the individual track. To do this, we placed a custom cue with a specific label so the code could listen for an AK callback; when triggered, it tells the system to select and play the next element. This significantly streamlined the engine-side processing and allowed a very natural crossfade between elements, just as a real radio DJ can manually force the next element to play or predetermine a crossfade duration. All elements had the same custom cue to exit, so the code only had to listen for a single callback each time it cycled.
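That callback chain can be modeled in a few lines. This is an illustrative sketch, not engine code: in Wwise terms, the "marker" would be the custom cue's label delivered through a marker callback registered when posting the Event, and the label name below is hypothetical:

```python
class Station:
    """Toy model of a station cycling through elements via one callback."""

    def __init__(self, schedule):
        self.schedule = list(schedule)
        self.log = []  # elements played, in order

    def play_next(self):
        if self.schedule:
            self.log.append(self.schedule.pop(0))

    def on_marker(self, label):
        # Mirrors listening for a marker callback with a known label;
        # every element carries the same cue, so one listener suffices.
        if label == "ELEMENT_DONE":
            self.play_next()

station = Station(["song_a", "dj_talk", "song_b"])
station.play_next()                 # kick things off
station.on_marker("ELEMENT_DONE")   # song_a's custom cue fires
station.on_marker("ELEMENT_DONE")   # dj_talk's cue fires
print(station.log)  # → ['song_a', 'dj_talk', 'song_b']
```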

In Conclusion…

When all this comes together, it feels like the most realistic radio experience I’ve had to date. Two cars passing while listening to the same station are completely in sync. You can hijack a car playing a song you like, and it immediately strips the LPF and volume offsets at the exact same place in the song. You can toggle through all the stations and land back on the initial song as if it kept broadcasting over the airwaves without considering your individual actions…because it actually did.

Without the perfect collision of Wwise’s capabilities, my exact position on the project at the time, my experience working in this exact field in the real world, and especially Mike’s incredible work and willingness to rewrite a paradigm, we’d probably still be running clocks and seek-playing songs at incredible cost to our CPU budget. Instead, we squashed all the bugs from the old system and made the radio as modular as possible for potential fan modding in the future. It was overall an extremely pleasant and satisfying experience that we’re very proud of.

Brendon Ellis

Senior Technical Audio Designer

Volition


Brendon Ellis started testing games in 2007, seeking shelter from the cutthroat rat race of the medium-market radio industry. He began as a line tester, moved into audio testing, then bug fixing, then sound design, before eventually becoming Volition’s senior technical audio designer.

Discord: poor-old-goat#8203

MobyGames

 @The_Pie_Maker
