I am attempting to play a .mp4 file in our project, which is using Wwise for its audio. I have already loaded the .mp4 and have access to the decoded audio and video streams.
The decoder provides the audio data in 16-bit interleaved integer PCM format, along with information describing the samples.
I believe that it should be possible to use the Wwise Audio Input plugin to have Wwise play this audio data.
I have created my Audio Input plug-in and SoundBank (
https://www.audiokinetic.com/library/edge/?source=Help&id=wwise_audio_input_plug_in) and have registered my plug-in and callback functions (
https://www.audiokinetic.com/library/edge/?source=SDK&id=referencematerial__audioinput.html), which are being called after my initial post of the Event.
I am not sure, however, what must be done inside my ExecuteCallback function to correctly fill the AkAudioBuffer io_pBufferOut object for playback. Currently I call io_pBufferOut->AttachInterleavedData, which results in no sound output and "AK Error: Source starvation" errors being printed. I believe the starvation errors indicate that data is being requested faster than the decoder can provide it.
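For reference, the frame accounting I am attempting looks roughly like this. This is a self-contained sketch, not the actual Wwise callback; the function name and parameters are my own stand-ins for the values I get from the decoder and from io_pBufferOut->MaxFrames():

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>

// Sketch of the frame accounting in my ExecuteCallback (names are mine,
// not the Wwise SDK's): given how many bytes of interleaved int16 PCM
// the decoder currently has buffered, decide how many frames to report
// as valid, so Wwise is never told more data exists than I can supply.
std::uint16_t FramesToSubmit(std::size_t availableBytes,
                             std::uint16_t maxFrames,
                             unsigned numChannels)
{
    const std::size_t bytesPerFrame = numChannels * sizeof(std::int16_t);
    const std::size_t framesAvailable = availableBytes / bytesPerFrame;
    return static_cast<std::uint16_t>(
        std::min<std::size_t>(framesAvailable, maxFrames));
}
```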
I have looked at the microphone example provided, but I am not sure it covers the same scenario: I do not build a buffer from an input device; I am handed the buffer directly by the decoder.
Can anybody show an example of how to copy an in-memory buffer into io_pBufferOut and have it played by Wwise?
**edit**
The cause of the source starvation and silence was incorrectly setting the available frame count lower than the amount of data provided. I am now able to play and get sound output; however, the sound is not correct at all. It is very choppy and does not seem to have the correct tone.
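One thing I suspect is a format mismatch: the decoder hands me interleaved 16-bit int PCM, while, as far as I understand, AkAudioBuffer stores each channel as a separate (deinterleaved) float buffer, which is what io_pBufferOut->GetChannel(i) returns. A conversion step like the following sketch may be what is missing; the function name, the std::vector output, and the 1/32768 normalization are my assumptions, not taken from the SDK:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Convert interleaved int16 PCM (as delivered by the decoder) into
// per-channel float buffers (the layout I believe AkAudioBuffer
// channels use). planar[ch] would then be copied into the pointer
// returned by io_pBufferOut->GetChannel(ch).
void DeinterleaveInt16ToFloat(const std::int16_t* interleaved,
                              std::size_t numFrames,
                              std::size_t numChannels,
                              std::vector<std::vector<float>>& planar)
{
    planar.assign(numChannels, std::vector<float>(numFrames));
    for (std::size_t frame = 0; frame < numFrames; ++frame)
        for (std::size_t ch = 0; ch < numChannels; ++ch)
            planar[ch][frame] =
                interleaved[frame * numChannels + ch] / 32768.0f;
}
```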