Hi, is there currently any solution to load a text, run TTS to generate a WAV, and have a MetaHuman read the text with lip sync in real time in Unreal?
What would the pipeline be?
What are the closest solutions as of today?
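For the text-to-WAV step of such a pipeline, here is a minimal sketch using pyttsx3 (one possible offline TTS engine, chosen here only for illustration; any TTS engine that writes an audio file would do, and the output container pyttsx3 writes can vary by platform):

```python
# pip install pyttsx3
import pyttsx3

def text_to_wav(text: str, out_path: str = "speech.wav") -> None:
    """Synthesize `text` to an audio file using the OS-native TTS voice."""
    engine = pyttsx3.init()
    engine.save_to_file(text, out_path)
    engine.runAndWait()  # blocks until synthesis finishes writing the file

text_to_wav("Hello from the MetaHuman lip-sync test.")
```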
Currently the audio input only supports pre-recorded files, but in the future Nvidia plans to support live audio input from microphones.
Hi @xabierolaz, welcome to Audio2Face.
We will be releasing a new version very soon which supports TTS streaming into Audio2Face.
Regarding real-time animation playback on a MetaHuman, we don’t support that yet. The closest solution is to export the blendshape animation (using blendshape conversion) and load it into a MetaHuman in Unreal.
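To illustrate the export half of that workaround, here is a minimal sketch that loads an exported blendshape-weight animation in Python; the JSON key names (`facsNames`, `weightMat`) are assumptions based on one export format I have seen and may differ between Audio2Face versions:

```python
import json

def load_blendshape_weights(path: str):
    """Load an exported blendshape-weight animation.

    Assumes a JSON layout with a list of pose names and a
    frames-by-poses weight matrix; the key names below are
    assumptions and may need adjusting for your A2F version.
    """
    with open(path) as f:
        data = json.load(f)
    names = data["facsNames"]    # assumed key: blendshape/pose names
    weights = data["weightMat"]  # assumed key: one row of weights per frame
    return names, weights

names, weights = load_blendshape_weights("a2f_export.json")
print(f"{len(weights)} frames x {len(names)} poses; first pose: {names[0]}")
```

In Unreal, the per-frame weights would then drive the corresponding MetaHuman face curves, e.g. through an imported animation asset or Live Link.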
Hi, any news on this topic?
Thanks
We’re about to ship a new version now. :) Aiming for the end of this week.
Hi, has this version been released? I don’t see any updates; the latest available build is 2021.3.1.
Thanks yseol!
I am really happy to see the AudioStreaming feature shipped for Audio2Face. But I wonder whether I can get the output of Audio2Face through some kind of API, so that I can use it in my Python program. I saw somewhere that Audio2Face will become part of the NVIDIA Maxine SDK; I would like to know whether that is the case.
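On the streaming-input side mentioned above, the client logic for pushing TTS audio into Audio2Face from a Python program might look roughly like the following; the transport callback `send_chunk` and the chunk size are hypothetical placeholders, since the actual streaming interface is whatever the new Audio2Face release exposes:

```python
import wave

CHUNK_FRAMES = 4096  # frames per streamed chunk; an arbitrary illustrative size

def stream_wav(path: str, send_chunk) -> None:
    """Read a WAV file and hand it off in small chunks, simulating a
    live TTS stream. `send_chunk(frames, samplerate)` stands in for
    whatever transport the real streaming API provides (hypothetical)."""
    with wave.open(path, "rb") as wav:
        samplerate = wav.getframerate()
        while True:
            frames = wav.readframes(CHUNK_FRAMES)
            if not frames:
                break
            send_chunk(frames, samplerate)

# Stub transport for a dry run; replace with the real streaming client.
stream_wav("speech.wav", lambda frames, sr: print(f"sent {len(frames)} bytes @ {sr} Hz"))
```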