The Audiokinetic Community Q&A is a forum where Wwise and Strata users can ask and answer each other's questions. If you would like an answer from the Audiokinetic Technical Support team, please use the Support Tickets page.

+1 vote
Hi there,

I'm looking for a bit of advice regarding lip sync options using Wwise. We're looking to integrate a degree of lip sync into the project I'm currently working on, more general mouth movements that match the speech pattern rather than proper sync, and I was wondering if anyone has experience using dialogue or audio from Wwise to drive animation like that? For example, measuring the volume output or transients of an audio event to control how and when the mouth opens. I know markers can be used to trigger other events, but as the lip sync is not currently using bespoke animations per line I'm not entirely sure that would work here. Similarly, setting up markers beyond the "Insert Filename Marker" option in the Wwise conversion settings is currently a no-go due to the sheer volume of work it would require and time constraints around other audio tasks.

Also, we're using Unity if anyone has any info specific to that.
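
Conceptually, the sort of thing I'm imagining is the sketch below (untested, names are placeholders): a Wwise Meter effect on the dialogue bus writing its output level to a Game Parameter, which I'd poll from Unity each frame and map onto a mouth blend shape. I haven't verified the exact GetRTPCValue wrapper signature or enum names against our integration version, so treat it as a rough outline rather than working code.

```csharp
using UnityEngine;

// Very rough, untested sketch of the "drive the mouth from loudness" idea:
// a Wwise Meter effect on the dialogue bus writes its level to a Game Parameter
// (here called "Dialogue_Level" -- name made up), which we poll every frame and
// map onto a "mouth open" blend shape.
public class MouthFromDialogueLevel : MonoBehaviour
{
    public SkinnedMeshRenderer face;      // face mesh with a mouth-open blend shape
    public int mouthOpenBlendShape = 0;   // blend shape index (placeholder)
    public float minDb = -48f;            // level treated as mouth closed
    public float maxDb = 0f;              // level treated as mouth fully open
    public float smoothing = 12f;         // higher = snappier mouth

    private float weight;

    void Update()
    {
        // Read the Game Parameter the Meter effect drives. The exact wrapper
        // signature and enum names can differ between integration versions.
        float db;
        int valueType = 2; // RTPCValue_GameObject in AK's RTPCValue_type enum
        AkSoundEngine.GetRTPCValue("Dialogue_Level", gameObject, 0, out db, ref valueType);

        // Map the dB level to a 0-100 blend shape weight and smooth it a little.
        float target = Mathf.InverseLerp(minDb, maxDb, db) * 100f;
        weight = Mathf.Lerp(weight, target, Time.deltaTime * smoothing);
        face.SetBlendShapeWeight(mouthOpenBlendShape, weight);
    }
}
```

That would avoid markers entirely, but I don't know how well a bus meter tracks individual lines in practice, hence the question.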

Cheers!
Jaime C. (110 points) General Discussion

1 Answer

0 votes
Use Sound Forge to add markers, which are saved into the audio file's metadata. Then call PostEvent with a marker callback so each marker fires at runtime.
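
In Unity it would look roughly like this; the Event name is a placeholder, and you should check the callback classes against your version of the Wwise Unity integration:

```csharp
using UnityEngine;

// Rough sketch: post a dialogue Event and listen for the markers embedded in the
// source file's metadata. "Play_VO_Line" is a placeholder Event name.
public class DialogueMarkerListener : MonoBehaviour
{
    public void PlayLine()
    {
        AkSoundEngine.PostEvent(
            "Play_VO_Line",
            gameObject,
            (uint)(AkCallbackType.AK_Marker | AkCallbackType.AK_EndOfEvent),
            OnWwiseCallback,
            null);
    }

    private void OnWwiseCallback(object cookie, AkCallbackType type, AkCallbackInfo info)
    {
        if (type == AkCallbackType.AK_Marker)
        {
            // strLabel is the marker text typed in Sound Forge; uPosition is its sample offset.
            var marker = (AkMarkerCallbackInfo)info;
            Debug.Log("Marker '" + marker.strLabel + "' at sample " + marker.uPosition);
            // Drive the mouth open/closed here based on the label or position.
        }
        else if (type == AkCallbackType.AK_EndOfEvent)
        {
            // Line finished playing: close the mouth.
        }
    }
}
```

The end-of-event callback is handy for snapping the mouth shut when the line finishes.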

Hope it helps you.
岳豪 (140 points)
...