On the Audiokinetic Community Q&A forum, users can ask and answer questions related to Wwise and Strata. To receive an answer from the Audiokinetic Technical Support team, be sure to use the Support Tickets page.

0 votes

Hello!

We're currently considering the switch to Wwise in our Unity project, but I have a few concerns that I can't find answers to.

  1. Converting our dialogue system to Wwise events would be cumbersome: I'd rather not adapt our already established AssetBundle-based dialogue system to work with Wwise events instead of AudioClips (our game has ~15,000 lines of dialogue). Is there some sort of AkSoundEngine.PlaySound(clip)? Or does absolutely everything have to go through sound banks and events? Is there an easy way to convert every dialogue clip to an event, or will our sound designers face an extra, very tedious step every time dialogue is added or changed? (See the External Sources sketch after this list.)
  2. Lip sync: Currently we are planning for different tiers of lip sync for different parts of the game:
    1. Hand-animated: Shouldn't be a problem, as the animators can animate to the sound itself.
    2. Custom phoneme metadata: Designers or animators will have a way to arrange phonemes on a timeline. Also shouldn't be a problem, but is there a way to acquire a waveform for a specific Wwise sound to make this easier?
    3. Automatic: Simple mouth movements based on the waveform of the audio clip. We absolutely need to grab the waveform for this, at least so we can bake the mouth movements. (See the metering sketch after this list.)
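
For anyone evaluating the first point: Wwise's External Sources feature targets exactly this case, letting one generic event play many files without authoring an event per dialogue line. Below is a minimal sketch, assuming a hypothetical event Play_Dialogue whose sound uses an External Source cookie named DialogueLine; the exact PostEvent overload and the AkExternalSourceInfoArray binding vary between integration versions, so treat the signatures as assumptions and check your version's generated API.

```csharp
using UnityEngine;

public class DialoguePlayer : MonoBehaviour
{
    // Hypothetical setup: a single "Play_Dialogue" event in the Wwise project
    // whose sound uses an External Source with the cookie "DialogueLine".
    public uint PlayLine(string wemFileName)
    {
        // One external source entry: the cookie is resolved at post time to a
        // .wem file converted through an External Sources list in the
        // authoring tool, so 15,000 lines never become 15,000 events.
        var sources = new AkExternalSourceInfoArray(1);
        sources[0].iExternalSrcCookie = AkSoundEngine.GetIDFromString("DialogueLine");
        sources[0].szFile = wemFileName;  // e.g. "VO/line_00042.wem"
        sources[0].idCodec = 4;           // AKCODECID_VORBIS in AkTypes.h; must match the conversion format

        // PostEvent overload that accepts external sources; the argument
        // layout may differ slightly between integration versions.
        return AkSoundEngine.PostEvent("Play_Dialogue", gameObject, 0, null, null, 1, sources);
    }
}
```

The trade-off is that the source files must be converted through an External Sources list at soundbank generation time, but that step can typically be scripted, so designers don't touch each line individually.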
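On grabbing a waveform for the automatic tier: as far as I know the runtime doesn't hand back decoded sample data from banks in any straightforward way, but a common substitute is to put a Wwise Meter effect on the dialogue bus, have it drive a Game Parameter, and poll that value from the game to open and close the mouth. A sketch under those assumptions follows; Dialogue_Level is a hypothetical Game Parameter name, and GetRTPCValue's parameter layout differs across integration versions.

```csharp
using UnityEngine;

public class AutoLipSync : MonoBehaviour
{
    // Hypothetical Game Parameter fed by a Wwise Meter effect placed on the
    // dialogue bus (the Meter plug-in can output its level to a Game Parameter).
    const string RTPC_NAME = "Dialogue_Level";

    [SerializeField] SkinnedMeshRenderer face; // mesh holding the mouth blend shape
    [SerializeField] int mouthBlendShapeIndex; // which blend shape to drive

    void Update()
    {
        // Query the global RTPC value; the out/ref parameter layout is a
        // sketch and may differ in your integration version.
        int valueType = (int)AkQueryRTPCValue.RTPCValue_Global;
        AkSoundEngine.GetRTPCValue(RTPC_NAME, null, 0, out float level, ref valueType);

        // Map an assumed -48..0 dB meter range onto a 0..100 blend weight.
        float weight = Mathf.InverseLerp(-48f, 0f, level) * 100f;
        face.SetBlendShapeWeight(mouthBlendShapeIndex, weight);
    }
}
```

For the baked variant, it may be simpler to run this kind of analysis offline against the original .wav files (which exist before Wwise conversion anyway) rather than extracting waveforms from banks at runtime.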

Thanks,

Bryant

Category: General Discussion | asked by Bryant C. (150 points)

1 Answer

0 votes
Hi Bryant,

Did you find a solution to the first point about playing arbitrary clips? I have the same challenge and would like to play either an AudioClip or, ideally, a specific audio object from a bank without needing an event for each line of dialogue.

Simon
answered by Simon A. (140 points)
...