Unreal Engine Audio echo

Hello!
In my project, I use A2F in headless mode via the REST API, and I use ElevenLabs to convert text into speech. I then use this audio to animate my MetaHuman in Unreal with LiveLink. My problem is that I hear the audio twice: the normal audio from ElevenLabs, but also a strange robotic voice that is slightly delayed. When I close LiveLink, I only hear the original ElevenLabs voice. It also seems that the MetaHuman face animation follows the original voice, not the delayed one.
Is there a solution for this?
Thanks in advance.

Does it help if you turn off the Audio2Face volume?

No, then I can only hear the robotic voice in UE, which is not in sync with the mouth movement. However, when I turn off the UE volume, the sound is correct and the mouth movement is in sync with the A2F audio. Any idea where this second voice in UE comes from, and why it sounds so robotic/different?

That audio in Unreal Engine is probably streamed from Audio2Face. You can disable it by unchecking Enable Audio Stream on the StreamLiveLink node.

Additionally, I think that if the sample rate of the audio matches the settings in Unreal Engine, it should sound correct (not robotic).
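If you control the audio before it reaches A2F, you can also resample it yourself so it matches whatever rate the LiveLink plugin expects. A minimal pure-Python sketch of linear-interpolation resampling (just to show the idea; a real pipeline would use a proper resampler like soxr or librosa, and these sample values are made up):

```python
def resample_linear(samples, src_rate, dst_rate):
    """Naive linear-interpolation resampler for mono PCM samples."""
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        # Position of output sample i in the source signal
        pos = i * src_rate / dst_rate
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[min(j + 1, len(samples) - 1)]
        out.append(a + (b - a) * frac)
    return out

# Toy signal: upsample from 22.05 kHz to 44.1 kHz doubles the sample count
tone = [0.0, 1.0, 0.0, -1.0] * 100
up = resample_linear(tone, 22050, 44100)
print(len(tone), len(up))  # 400 800
```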

It's an issue in the setup on the UE side, not in A2F. ElevenLabs streams audio at different frequencies (see "What audio formats do you support?" – ElevenLabs). If there is a mismatch between the streamed frequency and the frequency selected in the LiveLink plugin in UE, we end up hearing a robotic/delayed voice: essentially, UE plays the audio back at a lower frequency than it was streamed at. Usually selecting 44.1 kHz works, depending on the project. Check the link above and set the LiveLink plugin frequency accordingly. Hope it helps.
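To illustrate what that mismatch does to timing and pitch, here is a rough back-of-the-envelope sketch (the rates are just example values, not what any particular project uses):

```python
# Hypothetical mismatch: audio streamed at 44.1 kHz,
# but the playback side interprets the samples at 22.05 kHz.
streamed_rate = 44100   # Hz, e.g. the rate the TTS service delivers
playback_rate = 22050   # Hz, mis-set on the playback side

clip_seconds = 2.0
num_samples = int(streamed_rate * clip_seconds)

# Playing the same samples at half the rate takes twice as long...
playback_seconds = num_samples / playback_rate
# ...and shifts the pitch down by the same factor (here, one octave)
pitch_factor = playback_rate / streamed_rate

print(playback_seconds)  # 4.0 -> the delay you hear
print(pitch_factor)      # 0.5 -> the "robotic"/deep timbre
```

The actual ElevenLabs/LiveLink rates may differ, but the effect is the same: any rate mismatch stretches the clip and detunes it, which is exactly the delayed, robotic voice described above.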