Problem about streaming and live-link

Hello, guys.
I’m trying to stream voice to A2F. I called the method from the example (push_audio_track_streaming) and it works. The problem is that when I split a long text into sentences for transmission, the moment the player receives the second piece of speech it immediately plays it and interrupts the audio that is currently playing. I tried setting block to True and it mostly worked, but there was a noticeable gap between the audio segments, and blocking also stalls the other parts of my asynchronous program (such as the LLM).
Is there a way to have subsequent audio cached and then played sequentially? Or is it that the default gRPC server doesn’t support this?
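For what it’s worth, one workaround on the client side is to push each sentence onto a queue and let a single background worker call the blocking sender one sentence at a time, so the asyncio/LLM side never blocks on playback. This is only a sketch under assumptions: `send_blocking` here is a placeholder for whatever blocking push you use (e.g. `push_audio_track_streaming` with `block=True`); it is not part of the A2F API itself.

```python
import threading
import queue

class SequentialAudioSender:
    """Plays queued audio chunks one at a time in a background thread,
    so the async/LLM side of the program never blocks on playback."""

    def __init__(self, send_blocking):
        # send_blocking(chunk) is assumed to block until playback finishes
        # (e.g. a wrapper around push_audio_track_streaming with block=True).
        self._send = send_blocking
        self._q = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def push(self, chunk):
        """Non-blocking: enqueue one sentence's audio for sequential playback."""
        self._q.put(chunk)

    def close(self):
        """Wait for the queue to drain, then stop the worker."""
        self._q.put(None)  # sentinel: stop the worker after pending chunks
        self._worker.join()

    def _run(self):
        while True:
            chunk = self._q.get()
            if chunk is None:
                break
            self._send(chunk)  # blocks only this worker thread, not the caller
```

Usage would be: call `push()` from the LLM/TTS loop as each sentence finishes synthesizing; the worker plays them back in order without interrupting the previous one.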

In addition, I have another question: how do I use the Live Link plugin in UE to connect to another computer on the LAN? For example, if A2F is running on a different machine on the LAN, what should I enter for the address and port in the plugin?

Thanks for any help.


I solved the streaming problem. I send a parameter from the client to the gRPC server as metadata to control whether the current round of audio should skip the on_start callback, e.g. a flag like “is_first_sentence”. That way a new audio track is created only when needed, rather than a separate audio track for each sentence.
But there is still one problem with this: there is an obvious popping sound between the spliced audio segments, and I don’t know how to eliminate it yet.
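In case it helps others hitting the same issue: pops at splice points are usually caused by an amplitude discontinuity where one chunk ends and the next begins. A common remedy is to apply a few milliseconds of fade-out/fade-in (or a short crossfade) at the chunk boundaries before pushing each chunk. A minimal NumPy sketch, assuming mono float32 samples (the 5 ms fade length is just an illustrative default, not anything prescribed by A2F):

```python
import numpy as np

def fade_edges(chunk: np.ndarray, samplerate: int, fade_ms: float = 5.0) -> np.ndarray:
    """Apply a short linear fade-in and fade-out to a mono float chunk,
    so consecutive chunks start and end at zero amplitude (no click)."""
    n = min(int(samplerate * fade_ms / 1000.0), len(chunk) // 2)
    if n == 0:
        return chunk
    out = chunk.astype(np.float32).copy()
    ramp = np.linspace(0.0, 1.0, n, dtype=np.float32)
    out[:n] *= ramp           # fade in over the first n samples
    out[-n:] *= ramp[::-1]    # fade out over the last n samples
    return out
```

Running each sentence’s samples through something like this before pushing should smooth the joins; if the fades themselves become audible, a short overlapping crossfade between consecutive chunks is the usual next step.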


Hello, could you share the specific steps?

I have the same popping sound problem. Did you solve it?

Does it get solved if you focus on an app other than Audio2Face, e.g. click on the UE window?