I’m trying to stream voice to a2f. I called the method from the example (push_audio_track_streaming) and it works. The problem is that when I split a long text into sentences and send them one by one, as soon as the player receives the second chunk of speech it immediately plays it, cutting off the audio that is still playing. I tried setting block to True and that mostly worked, but there was a noticeable gap between the audio chunks, and the blocking call also stalls the other asynchronous parts of my program (such as the LLM).
Is there a way to have subsequent chunks buffered and then played back sequentially? Or is this a limitation of the default gRPC server?
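In case it helps clarify what I’m after, here is a rough sketch of the workaround I’m considering: serialize the blocking pushes in a background thread fed by a queue, so block=True only blocks that thread and the asyncio side (LLM, sentence splitting) keeps running. `push_chunk_blocking` below is a placeholder for the real `push_audio_track_streaming` call, and the chunk strings stand in for actual audio buffers:

```python
import asyncio
import queue
import threading

played = []

def push_chunk_blocking(chunk):
    # Placeholder for the real blocking push (push_audio_track_streaming
    # with block=True); here it just records that the chunk was "played".
    played.append(chunk)

def playback_worker(q):
    # Drain the queue in order; the blocking call only stalls this thread.
    while True:
        chunk = q.get()
        if chunk is None:  # sentinel: no more audio
            break
        push_chunk_blocking(chunk)

async def main():
    q = queue.Queue()
    worker = threading.Thread(target=playback_worker, args=(q,), daemon=True)
    worker.start()

    # The async side keeps running while the worker serializes the pushes.
    for sentence_audio in ["chunk-1", "chunk-2", "chunk-3"]:
        q.put(sentence_audio)
        await asyncio.sleep(0)  # yield so the event loop stays responsive

    q.put(None)                       # tell the worker to finish
    await asyncio.to_thread(worker.join)  # wait without blocking the loop

asyncio.run(main())
print(played)  # chunks come out in submission order
```

This keeps ordering without interrupting playback, but it doesn’t remove the gap between chunks, which is why I’m asking whether the server itself can buffer.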
A separate question: how do I use the Live Link plugin in UE to connect to another computer on the LAN? For example, if a2f is running on a different machine on the LAN, what address and port should I enter in the plugin?
Thanks for any help.