Audio2Face blend between Audios

Hello everyone,
In my project, I use A2F headless mode with the REST API and ElevenLabs to convert text from an LLM into speech. I then use this audio to animate my MetaHuman in Unreal Engine with LiveLink. Each sentence of the text response is converted into a single audio file. While this process works fairly well, the animation between each sentence looks off. Is there a way to blend the animations of two audio files to make the transitions look more natural?
Here is a video that shows my problem more clearly.

Thanks in advance.

I faced the same issue before, and solved it with the gRPC API, which allows you to stream audio to A2F. Between sentences, the gap is filled with silent PCM, so the character keeps animating.
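The gap-filling idea can be sketched independently of the gRPC plumbing. Below is a minimal, hypothetical Python sketch: it interleaves per-sentence 16-bit PCM sample buffers with silence and yields fixed-size chunks, which you would then wrap in the streaming request messages from the A2F gRPC sample client (the sample rate, chunk size, and function names here are illustrative assumptions, not the actual A2F API).

```python
SAMPLE_RATE = 16000   # assumed rate; match whatever your audio files use
CHUNK_SAMPLES = 4000  # ~0.25 s per chunk (illustrative value)

def pcm_chunks(samples, chunk_size=CHUNK_SAMPLES):
    """Yield fixed-size chunks of PCM samples for streaming."""
    for i in range(0, len(samples), chunk_size):
        yield samples[i:i + chunk_size]

def silence(duration_s, sample_rate=SAMPLE_RATE):
    """Silent PCM samples used to fill the gap between sentences."""
    return [0] * int(duration_s * sample_rate)

def stream_with_gaps(sentences, gap_s=0.3):
    """Interleave sentence audio with silence so the stream never stops.

    `sentences` is a list of PCM sample lists, one per sentence.
    Each yielded chunk is what you would pack into a gRPC audio
    message in the actual A2F streaming client.
    """
    for idx, samples in enumerate(sentences):
        yield from pcm_chunks(samples)
        if idx < len(sentences) - 1:  # fill the inter-sentence gap
            yield from pcm_chunks(silence(gap_s))
```

Because the stream stays open and silence arrives as ordinary audio, A2F treats the pause as part of one continuous take instead of stopping and restarting the animation between files.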

1 Like

Thank you for your help!! I’ll try that. Do you have an example of how to do it that you can share? Do you split your WAV audio file into chunks and then stream it?