
Maybe there's already a way around this, but an issue I've been consistently running into is the need to separate player-driven sounds from AI-driven sounds so they can be sent to different busses, letting the player's sounds take priority over the AI's in the mix via sidechaining. The only way we've found to do this is to duplicate all the assets so the copies can be routed to different busses and, if needed, given different attenuation curves. This is a NIGHTMARE, especially for groups of sounds with masses of elements, such as footsteps or vehicles, all for the sake of bussing them differently; constantly managing both hierarchies is very messy.

One trick we tried was using split aux sends to Player and AI busses while sending the dry main output to a muted bus so you only hear the aux signals (incidentally, pre-fader sends would also be useful here). However, it's our understanding that only a limited number of aux sends are available per game object at runtime, so this could interfere with reverbs and the like.
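For reference, here's roughly what that trick looks like from the game side with the Wwise SDK — a minimal sketch, assuming aux busses named Aux_Player and Aux_AI (placeholder names) and sounds with "Use game-defined auxiliary sends" enabled; exact struct fields vary by SDK version:

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>

// Route an emitter's signal entirely through a Player or AI aux bus,
// muting the dry path so only the aux signal is heard (the "muted bus"
// trick described above, without burning an extra bus on the dry out).
void RouteThroughAuxOnly(AkGameObjectID emitter, AkGameObjectID listener, bool isPlayer)
{
    // Send at full level to the Player or AI aux bus (placeholder names).
    AkAuxSendValue send;
    send.listenerID = listener; // field exists in Wwise 2016.1+; omit on older SDKs
    send.auxBusID = AK::SoundEngine::GetIDFromString(isPlayer ? "Aux_Player" : "Aux_AI");
    send.fControlValue = 1.0f;
    AK::SoundEngine::SetGameObjectAuxSendValues(emitter, &send, 1);

    // Silence the dry output bus path for this emitter/listener pair.
    AK::SoundEngine::SetGameObjectOutputBusVolume(emitter, listener, 0.0f);
}
```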

Using footsteps as an example, there's no real need for player and enemy footsteps to use different-sounding assets. Similarly to linking/unlinking per platform, it would be really useful if you could create multiple versions of the settings for a group of sound objects. For example, one version could route to the Player bus with its own attenuation curve, while another routes to the AI bus with a different attenuation, perhaps even saving some resources by reducing the variation count of each footstep for the AI. The possibilities are endless, really.
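Until something like this exists, one partial stopgap (a sketch, assuming a hypothetical Switch Group named "SoundSource" with Player/AI states) is to put the Player and AI variants under a Switch Container and tag each emitter once at registration. The child objects still have to be duplicated in the authoring hierarchy, but they can point at the same converted media, so at least the audio files themselves aren't doubled:

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>

// Tag an emitter so every event it posts resolves the "SoundSource"
// Switch Container (hypothetical name) to the right child, which can
// have its own output bus and attenuation ShareSet.
void TagEmitter(AkGameObjectID emitter, bool isPlayer)
{
    AK::SoundEngine::SetSwitch("SoundSource", isPlayer ? "Player" : "AI", emitter);
}
```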
in Feature Requests by Jonathan V. (100 points)

