
0 votes

Hello!

We're currently considering the switch to Wwise in our Unity project, but I have a few concerns that I can't find answers to.

  1. Converting our dialogue system to Wwise events would be cumbersome: I'd rather not have to adapt our already established AssetBundle-based dialogue system to work with Wwise events instead of AudioClips (our game has ~15,000 lines of dialogue). Is there some sort of AkSoundEngine.PlaySound(clip) equivalent? Or does absolutely everything have to go through sound banks and events? Is there an easy way to convert every single dialogue clip to an event, or will our sound designers need to go through an extra, very tedious step every time dialogue is added or changed? (The closest thing I've found so far is the External Sources feature; see the first sketch after this list.)
  2. Lip sync: Currently we are planning for different tiers of lip sync for different parts of the game:
    1. Hand-animated: Shouldn't be a problem, as the animators can animate to the sound itself.
    2. Custom phoneme metadata: Designers or animators will have a way to arrange phonemes on a timeline. Also shouldn't be a problem, but is there a way to acquire a waveform for a specific Wwise sound to make this easier?
    3. Automatic: Simple mouth movements based on the waveform of the audio clip. We absolutely need to grab the waveform for this, at least so we can bake the mouth movements (see the second sketch after this list for our fallback plan).
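For context on point 1: the closest built-in answer I've found is Wwise's External Sources feature, i.e. a single reusable event whose sound is flagged as an external source in the authoring tool, with the actual media file picked at runtime. Below is a rough sketch of what I imagine the Unity side would look like. I haven't verified any of it: the external-sources PostEvent overload, the AkExternalSourceInfoArray type, the codec constant, and the "Play_Dialogue" / "Dialogue_Source" names are all assumptions based on skimming the SDK docs, so the actual binding may differ.

```csharp
using UnityEngine;

public class DialoguePlayer : MonoBehaviour
{
    // Assumes a single "Play_Dialogue" event whose source is marked as an
    // External Source with the cookie "Dialogue_Source" in the Wwise project.
    // All names and the PostEvent overload below are unverified assumptions.
    public uint PlayLine(string wemPath)
    {
        var sources = new AkExternalSourceInfoArray(1);
        sources[0].iExternalSrcCookie = AkSoundEngine.GetIDFromString("Dialogue_Source");
        sources[0].szFile = wemPath;                   // e.g. "DX/line_00042.wem"
        sources[0].idCodec = AkSoundEngine.AKCODECID_VORBIS;

        return AkSoundEngine.PostEvent(
            "Play_Dialogue", gameObject,
            0, null, null,                             // flags, callback, cookie
            1, sources);
    }
}
```

If that pattern holds up, our AssetBundles would only need to ship .wem files instead of AudioClips, and no per-line events would be required.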
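And to make the automatic tier concrete: since Wwise doesn't appear to expose decoded sample data at runtime, our fallback plan is to bake the mouth envelope offline from the source .wav files (the ones that live in the Wwise project's Originals folder). A minimal sketch, assuming a canonical 44-byte 16-bit PCM header; a real tool would parse the RIFF chunks properly:

```csharp
using System;
using System.IO;

public static class MouthEnvelopeBaker
{
    // Returns one RMS value per window (e.g. 30 windows/sec to match a 30 fps
    // animation track). Naive: assumes 16-bit PCM with a canonical header.
    public static float[] Bake(string wavPath, int windowsPerSecond = 30)
    {
        byte[] bytes = File.ReadAllBytes(wavPath);
        short channels = BitConverter.ToInt16(bytes, 22);   // canonical offsets
        int sampleRate = BitConverter.ToInt32(bytes, 24);

        const int dataOffset = 44;                // start of PCM data (naive)
        int sampleCount = (bytes.Length - dataOffset) / 2;
        int window = sampleRate * channels / windowsPerSecond;

        var envelope = new float[sampleCount / window + 1];
        for (int i = 0; i < sampleCount; i++)
        {
            float s = BitConverter.ToInt16(bytes, dataOffset + i * 2) / 32768f;
            envelope[i / window] += s * s;
        }
        for (int w = 0; w < envelope.Length; w++)
            envelope[w] = (float)Math.Sqrt(envelope[w] / window); // last window is approximate
        return envelope;
    }
}
```

The same baked data would also give us the waveform display for the phoneme-timeline tool in the second tier, so an answer to the runtime-waveform question would mostly be a convenience for us.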

Thanks,

Bryant

Bryant C. (150 points) in General Discussion

1 Answer

0 votes
Hi Bryant,

Did you find a solution to the first point about playing arbitrary clips? I have the same challenge, and would like to play either an audio clip or, ideally, a specific audio object from a bank without needing an event for each line of dialogue.

Simon
Simon A. (140 points)
...