TTS lipsync through Audio2Face to MetaHuman in real time

Hi, is there currently any solution to load a text, run it through TTS to generate a WAV, and have the MetaHuman read the text with lipsync in real time in Unreal?

What would be the pipeline?

What are the closest solutions as of today?

Currently, audio input only supports pre-recorded files, but in the future NVIDIA plans to support live audio input from microphones.

Hi @xabierolaz, welcome to Audio2Face.
We will be releasing a new version very soon which supports TTS streaming into Audio2Face.
Regarding real-time animation playback on a MetaHuman, we don’t support it yet. The closest solution is exporting the blendshape animation (using blendshape conversion) and loading it into a MetaHuman in Unreal.
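To make the workflow concrete, here is a minimal sketch of the offline pipeline described above, under the assumption that Audio2Face currently accepts only pre-recorded WAV files. The `synthesize_wav` function is a placeholder standing in for a real TTS engine (it just writes a test tone); the Audio2Face bake/export and the Unreal import remain manual steps in the current release and are shown only as comments:

```python
# Sketch of the text -> WAV -> Audio2Face -> MetaHuman pipeline.
# Hypothetical illustration only: synthesize_wav is NOT a real TTS API,
# and the Audio2Face / Unreal steps are manual today.
import math
import struct
import wave

def synthesize_wav(path, text, sample_rate=22050, duration_s=1.0):
    """Stand-in for a TTS engine: writes a 440 Hz placeholder tone.

    In a real pipeline, replace this with your TTS engine's WAV export
    (16-bit mono PCM is a safe target format for Audio2Face).
    """
    n_frames = int(sample_rate * duration_s)
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)        # mono speech track
        wav.setsampwidth(2)        # 16-bit PCM samples
        wav.setframerate(sample_rate)
        for i in range(n_frames):
            sample = int(32767 * 0.3 *
                         math.sin(2 * math.pi * 440 * i / sample_rate))
            wav.writeframes(struct.pack("<h", sample))
    return path

# Step 1: text -> pre-recorded WAV (live streaming is not yet supported).
wav_path = synthesize_wav("line_01.wav", "Hello from the MetaHuman!")

# Step 2 (manual): load wav_path into Audio2Face, bake the facial
#                  animation, and export the blendshape curves via
#                  the blendshape conversion tool.
# Step 3 (manual): import the exported animation onto the MetaHuman
#                  face rig in Unreal and play it back with the audio.
print(wav_path)
```

Once the announced TTS-streaming release lands, step 1 could feed Audio2Face directly instead of going through a file on disk.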