Hello everyone,
In my project, I use A2F in headless mode with the REST API, together with ElevenLabs, to convert text from an LLM into speech. I then use that audio to animate my MetaHuman in Unreal Engine via LiveLink. Each sentence of the LLM's response is converted into a separate audio file. The pipeline works fairly well, but the animation between consecutive sentences looks off. Is there a way to blend the animations of two consecutive audio files so the transitions look more natural?
Here is a video that shows my problem more clearly.
Thanks in advance.