Dehumaniser Live: An innovative technology focused on voice processing


In the world of games, there are many characters that are completely fictitious. Enormous dragons, gigantic soldiers, werewolves, zombies, and cyborgs each come with their own personality, battling the player or, at times, befriending them as the story progresses. It is the character's voice that gives such a virtual being its personality and brings it to life, so the voice plays a critical role in delivering the best possible gaming experience. However, it isn't easy to create voices for characters that don't actually exist, and if the game is multilingual, the workload quickly multiplies. KROTOS Dehumaniser Live is an innovative technology specializing in vocal effects that works as a runtime plug-in for Wwise. Its voice processing functions enable high-quality, creative voices, making it possible, for example, to morph a character's voice between human and werewolf in real time.

dehumaniser2-compressed-588x312.jpg

Krotos Ltd, one of Audiokinetic's community plug-in partners, offers advanced audio technology. KROTOS Dehumaniser Live consists of four components:

  • Dehumaniser Simple Mode
  • Dehumaniser Advanced Mode
  • Vocoder
  • Mic Input

There are two modes in the main component, Dehumaniser Live:

Dehumaniser Simple Mode

  • Age
  • Aggressiveness
  • Size
  • Character
  • Wildness

Figure_01.jpg

Dehumaniser Advanced Mode

  • Granular
  • Delay Pitch Shifting (x 2)
  • Flanger/Chorus
  • Ring Modulator

Figure_02.jpg

In Simple Mode, the possibilities are concentrated into five parameters, so morphing a voice becomes a simple operation, and it keeps CPU usage low. Advanced Mode, meanwhile, offers more than 20 parameters to tweak, which can increase the CPU workload. The two modes produce very distinct effects, so you can decide which one is best for the situation at hand. All parameters can be controlled in real time using RTPCs. Configure an RTPC for the parameter that changes a human character into a monster, and the character can morph smoothly in-game as well. You don't need separate assets for the original voice and the processed voice, so you also save on memory.
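As a rough illustration of how this might look on the game side, here is a minimal sketch of driving such a morphing parameter from code. The Game Parameter name "Monster_Morph" and the helper function are assumptions made for the example; in the Wwise project, the RTPC curve would map this Game Parameter onto the relevant Dehumaniser Live parameter (Size or Character, for instance).

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>

// Hypothetical helper: morphAmount01 goes from 0.0 (fully human) to 1.0 (fully monster).
// "Monster_Morph" is an illustrative Game Parameter name, not one defined by the plug-in.
void UpdateMonsterMorph(AkGameObjectID characterId, float morphAmount01)
{
    AK::SoundEngine::SetRTPCValue("Monster_Morph", morphAmount01, characterId);
}
```

Because the same voice asset is used throughout, calling this every frame during a transformation sequence is enough to morph the voice smoothly, with no extra assets or memory.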

Figure_03.jpg

 

In Dehumaniser Simple Mode, the Character and Size parameters are crucial, and their effects are obvious. A good approach is to use these two parameters to establish the character's foundation through size and texture, then adjust the remaining parameters until you get the result you want.

In Dehumaniser Advanced Mode, the two Delay Pitch Shift parameters are the most important. They set the base pitch, and you can configure the Granular parameter for added layers of creepy texture. You can then use the various effects included in the package to fine-tune the final voice. The Vocoder produces a mechanical voice, like a robot's. If you select Pitch Tracker Mode, it tracks the pitch variation of the input signal, so I recommend trying that one first. Set Pitch Mode gives you a typical robotic voice effect. You can use the Carrier parameter's ten waveforms and the 8-band EQ to enhance the robot voice texture.

Figure_04.jpg

Finally, Mic Input is a component that connects the plug-in to your computer's microphone input, allowing you to play back your own voice in Wwise in real time. That means you can use your own voice to test the Dehumaniser Live functions. Of course, if you pair Dehumaniser Live with your game's voice chat system, it can act as a runtime component that processes the user's voice in real time.
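The article doesn't show how a game would actually route a voice-chat signal into Wwise, but one common approach is Wwise's generic Audio Input source plug-in: the game registers callbacks that feed captured microphone samples into a voice, and that voice is routed through a bus or object hosting Dehumaniser Live. The Event name, capture helper, and format choices below are assumptions made for the sake of the sketch.

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>
#include <AK/Plugin/AkAudioInputSourceFactory.h> // generic Audio Input source plug-in

// Hypothetical helper provided by your voice-chat/microphone capture layer.
// Writes up to maxFrames mono float samples into dest and returns how many were written.
extern AkUInt32 PopCapturedSamples(float* dest, AkUInt32 maxFrames);

static void GetMicFormat(AkPlayingID /*playingID*/, AkAudioFormat& io_format)
{
    // Describe the data we will deliver: mono, 48 kHz (adjust to your capture format).
    io_format.uSampleRate = 48000;
    io_format.channelConfig.SetStandard(AK_SPEAKER_SETUP_MONO);
}

static void FillMicBuffer(AkPlayingID /*playingID*/, AkAudioBuffer* io_buffer)
{
    AkUInt32 written = PopCapturedSamples(io_buffer->GetChannel(0), io_buffer->MaxFrames());
    io_buffer->uValidFrames = (AkUInt16)written;
    io_buffer->eState = AK_DataReady; // keep the voice alive; AK_NoMoreData would end it
}

void StartVoiceChatProcessing(AkGameObjectID chatObject)
{
    SetAudioInputCallbacks(FillMicBuffer, GetMicFormat);
    // "Play_VoiceChat_Input" is an illustrative Event whose target is the Audio Input
    // source, routed through the bus or object where Dehumaniser Live is inserted.
    AK::SoundEngine::PostEvent("Play_VoiceChat_Input", chatObject);
}
```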

Figure_05.jpg 

Audiokinetic held a demonstration at CEDEC 2017 in Japan this past August, showcasing how Dehumaniser Live can be combined with the dialogue localization features in Wwise to create an innovative workflow, and it attracted interest both at the Audiokinetic booth and in our sponsored lecture. The Audiokinetic sponsored session at CEDEC 2017 focused on how to use the Wwise dialogue workflow. We invited Adam Levenson and Matthew Collings from Krotos, the developer of Dehumaniser Live, as well as dialogue recording specialist Tom Hays from RocketSound, to talk about the new possibilities of dialogue production using Wwise. Masato Ushijima, Audiokinetic's in-house Product Expert, also introduced various dialogue functions in Wwise.

Picture1.png

In recent years, dialogue has become one of the areas of game audio development that demands the most work. Reducing the number of steps involved in dialogue production means more time to be creative with effects and music. Many game developers rely on spreadsheets to manage dialogue, and Wwise allows you to import voice data directly from such a database. You can perform batch imports that include the Event and property settings, so integrating dialogue becomes quick and reliable.
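For example, one way Wwise supports this is through tab-delimited import files that can be exported straight from a spreadsheet, with one row per voice asset and columns for the target object and Event. The column names and paths below are illustrative assumptions only; check the Audio File Importer documentation for the exact headers supported by your Wwise version.

```
Audio File	Object Path	Event	Notes
VO_Guard_001.wav	\Actor-Mixer Hierarchy\Dialogue\VO_Guard_001	Play_VO_Guard_001	Gate greeting
VO_Guard_002.wav	\Actor-Mixer Hierarchy\Dialogue\VO_Guard_002	Play_VO_Guard_002	Alert bark
```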

Picture2.png

 Picture3.png

After you've integrated your dialogue assets into Wwise, it's time for Dehumaniser Live. Matthew Collings presented the basic features of Dehumaniser Live and demonstrated some presets. He gave us monster voices and robot voices, and just watching him run through a string of presets sparked an endless number of ideas. In Wwise, you can track performance, tweak parameters, and even link them with RTPCs to enable interactivity at runtime. Dehumaniser Live works as a plug-in inside Wwise, so there is no need to process the recorded voice data in a DAW to add effects and then render the audio again.

Wwise supports multilingual games. If you import voice data as Sound Voice objects, you can switch between them according to the Language setting. If you import the same dialogue line in different languages and use an identical filename for all of them, Wwise automatically matches the filenames and stores the versions in the same object. As long as the filenames match, all you need to do is drag and drop them into Wwise, leaving no room for manual errors. If you create a monster voice with Dehumaniser Live and then switch to another language, the exact same settings are applied to that language, so you no longer need to go through each language and repeat the effect settings.
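On the runtime side, switching the active language is a single call. Here is a minimal sketch, assuming a language name that matches one defined in the Wwise project's Language settings:

```cpp
#include <AK/SoundEngine/Common/AkStreamMgrModule.h>

// Switch the dialogue language at runtime. Sound Voice objects resolve to the
// matching language version automatically, so the same Events and the same
// Dehumaniser Live settings apply regardless of which language is current.
void SwitchDialogueLanguage()
{
    // "Japanese" is an example; the string must match a language defined in the project.
    AK::StreamMgr::SetCurrentLanguage(AKTEXT("Japanese"));
}
```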

Picture4.png

Tom Hays talked about some of the issues with the conventional approach to dialogue production from an outsourcing provider's point of view. The main point he raised was that he often has no idea how the voice files he delivers after recording will actually be used in-game. With Wwise, he can use Work Units together with version management systems like Perforce to deliver data ready for implementation, so, as an outsourcing vendor, he can control the quality of what he delivers. If the developer provides him with the project's game build, he can play back the dialogue in-game and check the results, and he stressed that the Wwise dialogue features allow him to significantly improve both efficiency and quality.

The final part of the presentation focused on Vorbis-related improvements in Wwise. Even social networking games can now carry up to 10,000 voice files, and while game consoles perform drastically better than before, it is still important to keep sound data compressed to the minimum. Audiokinetic has made proprietary enhancements to Wwise Vorbis: depending on the waveform data and settings, file sizes can be reduced by more than 20% compared to previous, already-optimized versions of Wwise. The dialogue workflow has been improved, and performance can be maximized.

Picture5.png

Masato Ushijima

Product Expert

Masato Ushijima studied acoustics and music theory in the Music Synthesis department at the Berklee College of Music. After returning to Japan, he started his career in audio, working for three years on the WWE games in sound design, direction, requirement specifications, and interpreting. In 2015, Masato founded Sonologic-Design (www.sonologic-design.com), which specializes in overall audio support, including sound design, direction, and requirement specifications, mainly in the gaming industry. He works on game audio as well as gaming machines, animation, commercials, promotional videos, and other applications, offering a wide range of expertise in multi-audio, music production, and voice direction. In March 2017, he was appointed Product Expert at Audiokinetic K.K.

Comments

Félix Tiévant

November 23, 2017 at 03:38 am

Really impressive! Can this plugin be used at bus level? Also, small typo on the advanced tab: FreqUency!

