

Hello!

We're currently considering switching to Wwise in our Unity project, but I have a few concerns that I haven't been able to find answers to.

  1. Converting our dialogue system to Wwise Events would be cumbersome: I'd rather not adapt our already established AssetBundle-based dialogue system to work with Wwise Events instead of AudioClips (our game has ~15,000 lines of dialogue). Is there some sort of AkSoundEngine.PlaySound(clip)? Or does absolutely everything have to go through SoundBanks and Events? Is there an easy way to convert every single dialogue clip to an Event, or will our sound designers face an extra, very tedious step every time dialogue is added or changed? (For what our current playback path looks like, see the first sketch after this list.)
  2. Lip sync: Currently we are planning for different tiers of lip sync for different parts of the game:
    1. Hand-animated: Shouldn't be a problem, as the animators can animate to the sound itself.
    2. Custom phoneme metadata: Designers or animators will have a way to arrange phonemes on a timeline. This also shouldn't be a problem, but is there a way to acquire the waveform for a specific Wwise sound to make this easier?
    3. Automatic: Simple mouth movements driven by the waveform of the audio clip. We absolutely need access to the waveform for this, at least so we can bake the mouth movements offline (see the second sketch below).
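
For context on point 1, here is roughly what our current playback path looks like, next to the Event-based call I understand Wwise expects. This is only a sketch; lineId and the Play_ naming convention are placeholders for illustration.

```csharp
using UnityEngine;

public class DialoguePlayer : MonoBehaviour
{
    public AudioSource audioSource;

    // Today: resolve a clip from an AssetBundle and play it directly.
    // "lineId" stands in for however the dialogue system keys its lines.
    public void PlayLineCurrent(AssetBundle bundle, string lineId)
    {
        AudioClip clip = bundle.LoadAsset<AudioClip>(lineId);
        audioSource.clip = clip;
        audioSource.Play();
    }

    // With Wwise, playback goes through a named Event instead, which seems
    // to imply one Event (and a SoundBank entry) per line unless there is a
    // workaround. The "Play_" prefix is a made-up naming convention.
    public void PlayLineWwise(string lineId)
    {
        AkSoundEngine.PostEvent("Play_" + lineId, gameObject);
    }
}
```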
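
For the waveform questions (tiers 2 and 3), the baking step we have in mind looks roughly like the sketch below: plain Unity, run at edit time against the source AudioClips before any Wwise conversion, since as far as I can tell the Wwise runtime doesn't hand decoded sample data back to Unity. The window size and the RMS measure are just reasonable defaults, not requirements.

```csharp
using UnityEngine;

public static class LipSyncBaker
{
    // Bake a coarse mouth-open envelope: one RMS value per analysis window.
    // Note: AudioClip.GetData only returns sample data when the clip's load
    // type is Decompress On Load.
    public static float[] BakeEnvelope(AudioClip clip, float windowSeconds = 0.033f)
    {
        var samples = new float[clip.samples * clip.channels];
        clip.GetData(samples, 0);

        // Window length in interleaved samples (~30 windows/sec by default).
        int windowSize = Mathf.Max(1, (int)(clip.frequency * windowSeconds) * clip.channels);
        int windowCount = (samples.Length + windowSize - 1) / windowSize;
        var envelope = new float[windowCount];

        for (int w = 0; w < windowCount; w++)
        {
            int start = w * windowSize;
            int end = Mathf.Min(start + windowSize, samples.Length);

            float sum = 0f;
            for (int i = start; i < end; i++)
                sum += samples[i] * samples[i];

            envelope[w] = Mathf.Sqrt(sum / (end - start)); // RMS for this window
        }

        return envelope;
    }
}
```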

Thanks,

Bryant

in General Discussion by Bryant C. (150 points)

1 Answer

Hi Bryant,

Did you find a solution to the first point, about playing arbitrary clips? I have the same challenge, and would like to play either an audio clip or, ideally, a specific audio object from a bank, without having an Event for each line of dialogue.
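
The closest mechanism I've seen in the Wwise documentation is External Sources: you author a single template Event whose source is marked external, and supply the actual converted file when the Event is posted. I haven't verified this in Unity; the sketch below mirrors the C++ API (AK::SoundEngine::PostEvent with AkExternalSourceInfo), and the binding type names and PostEvent overload are assumptions that may not match your integration version.

```csharp
using UnityEngine;

public class ExternalDialoguePlayer : MonoBehaviour
{
    // Sketch only. Assumes one authored Event ("Play_Dialogue_External")
    // whose source is marked External in the Wwise project, with the actual
    // .wem file chosen at runtime. AkExternalSourceInfoArray and this
    // PostEvent overload mirror the C++ API and may differ in your version
    // of the Unity integration.
    public void PlayLine(string wemPath)
    {
        var sources = new AkExternalSourceInfoArray(1);

        // The cookie must match the external source name authored in Wwise.
        sources[0].iExternalSrcCookie = AkSoundEngine.GetIDFromString("Dialogue_Source");
        sources[0].idCodec = 4; // AKCODECID_VORBIS in AkTypes.h; the constant's name varies by version
        sources[0].szFile = wemPath;

        AkSoundEngine.PostEvent(
            AkSoundEngine.GetIDFromString("Play_Dialogue_External"),
            gameObject,
            0,       // callback flags (none)
            null,    // callback
            null,    // callback cookie
            1,       // number of external sources
            sources);
    }
}
```

If that works, it would mean one Event total for dialogue instead of one per line, with the per-line files converted through the External Sources list in the authoring tool.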

Simon
by Simon A. (140 points)
...