Can Audio2Face live stream to a MetaHuman?

I found only an option for live streaming to the audio player. I want to stream TTS audio to Audio2Face and apply the facial expressions to a MetaHuman in real time. Can Audio2Face do that?

Hello @heury! Real-time conversion is not currently supported; however, it is something we are working on. If you haven’t already, check out our video here: Audio2Face to Metahuman | NVIDIA On-Demand

Thanks for your response. If real-time conversion is not supported, can I convert a .wav to .usd using Python, without the GUI app?

Hi, currently we only support this function via the GUI, sorry about that.
May I ask what your scenario is for converting .wav to .usd via Python alone? Is there any reason to avoid the app and GUI during the process?

I want to use a MetaHuman as an AI assistant.
So I need a way to convert TTS wav audio into MetaHuman animation in real time, automatically.
Do you have any idea how to support this with Audio2Face?

I need a way to convert TTS wav audio into MetaHuman animation in real time

Yes, a real-time live link to the MetaHuman in UE is on our roadmap. At the moment we cannot confirm the release schedule for this feature, so please stay tuned.
If you need this very soon, I recommend reading the source code and trying to build it yourself. You can send and receive data using network protocols; some users have already implemented such a feature themselves. Thanks.
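Conceptually, the custom route is: read your TTS output on one side and push the raw audio over the network to whatever bridge you build on the A2F side. Below is a minimal Python sketch of the sending side only; the host, port, and length-prefix framing are all assumptions for a custom receiver you would write yourself, since A2F does not ship such an endpoint.

```python
# Minimal sketch of the send side: stream a TTS wav file in chunks over
# a plain TCP socket. The host/port and the 4-byte length framing are
# assumptions for a custom receiver -- A2F does not provide this endpoint.
import socket
import struct
import wave

HOST, PORT = "127.0.0.1", 12345  # hypothetical custom receiver
CHUNK_FRAMES = 4096

with wave.open("tts_output.wav", "rb") as wav, \
        socket.create_connection((HOST, PORT)) as sock:
    # Send basic format info first so the receiver can interpret the bytes.
    header = struct.pack("<HHI", wav.getnchannels(),
                         wav.getsampwidth(), wav.getframerate())
    sock.sendall(header)
    while True:
        frames = wav.readframes(CHUNK_FRAMES)
        if not frames:
            break
        # Length-prefix each chunk so the receiver knows message boundaries.
        sock.sendall(struct.pack("<I", len(frames)) + frames)
    sock.sendall(struct.pack("<I", 0))  # zero-length chunk = end of stream
```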

Did you release the source code of Audio2Face? If so, where can I download or refer to it?

The Python part of A2F is accessible via your install directory. You can check the Python extensions there and build whatever you want on top of the current implementation. :-)
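For example, on a default Windows install via the Omniverse Launcher, you could enumerate those extensions with a few lines of Python (the install path and version folder below are assumptions; adjust them to your machine):

```python
# Minimal sketch: list the Python extensions that ship with Audio2Face.
# The install root below is an assumption -- adjust it to wherever the
# Omniverse Launcher installed A2F on your machine.
from pathlib import Path

A2F_ROOT = Path.home() / "AppData/Local/ov/pkg/audio2face-2021.3.2"  # hypothetical version
for ext_toml in sorted((A2F_ROOT / "exts").glob("omni.audio2face.*/config/extension.toml")):
    print(ext_toml.parent.parent.name)  # e.g. omni.audio2face.core
```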

I found the directory, but I have no idea how to implement the feature.
There are several tests in the directory, and I investigated the following files:

exts\omni.audio2face.core\omni\audio2face\core\tests\test_a2f_core.py
exts\omni.audio2face.tool\omni\audio2face\tool\tests\test_a2f_usd_exporter.py

I want to implement a gRPC server that takes a wav file and streams it out in USD format, and then connect it to the MetaHuman.
But I can’t find any solution for that in the above files.
Could you suggest any example I can refer to?
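For context, here is roughly the receive side I have in mind, written as a plain TCP stand-in for the gRPC service and matching the chunk framing sketched earlier in this thread. The step that actually drives A2F and exports USD is exactly the part I cannot find in the extension sources:

```python
# Rough stand-in receiver (plain TCP instead of gRPC) matching the
# framing sketched above: a format header, then length-prefixed audio
# chunks, terminated by a zero-length chunk. Feeding the audio into A2F
# and exporting USD is the missing piece.
import socketserver
import struct


def recv_exact(sock, n):
    """Read exactly n bytes from the socket or raise on early close."""
    buf = b""
    while len(buf) < n:
        part = sock.recv(n - len(buf))
        if not part:
            raise ConnectionError("peer closed mid-message")
        buf += part
    return buf


class AudioHandler(socketserver.BaseRequestHandler):
    def handle(self):
        channels, sampwidth, framerate = struct.unpack(
            "<HHI", recv_exact(self.request, 8))
        print(f"incoming audio: {channels}ch, {sampwidth * 8}-bit, {framerate} Hz")
        while True:
            (length,) = struct.unpack("<I", recv_exact(self.request, 4))
            if length == 0:  # zero-length chunk marks end of stream
                break
            chunk = recv_exact(self.request, length)
            # TODO: feed `chunk` into A2F and export the result as USD.
            print(f"received {len(chunk)} bytes of audio")


if __name__ == "__main__":
    with socketserver.TCPServer(("127.0.0.1", 12345), AudioHandler) as srv:
        srv.serve_forever()
```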

Hi heury,

I am trying to animate a MetaHuman via A2F in real time, too. Have you had any success implementing this extension?
@NVIDIA: Can you say yet when this update will be released?

Hi, this will definitely be super useful. Any update on the progress?

Hi, there has been no progress on this feature, unfortunately. We plan to work on this item within the year, but cannot confirm an exact date yet. Thanks for the interest!

Bump. Any update on this feature? @yseol, which users have implemented such a function themselves?

Hello @feel.or.fake! Here is a link to the latest releases of the Audio2Face app: Release Notes — Omniverse Audio2Face documentation

Stay tuned for some update announcements “very” soon. Cheers.

Hey @siyuen, any update on this?

We announced ACE (Omniverse Avatar Cloud Engine) at SIGGRAPH. This will allow live streaming of A2F from cloud services to the UE MetaHuman.

We did a live recorded demo of that; check it out below. And if you missed the full Digital Human section of our special address, check out the link at the bottom too.

Release-date-wise, we will have more updates at GTC in mid-September. Our goal is as soon as we can, so stay tuned.

ACE

NVIDIA Siggraph Special Address - Digital Human

Will Omniverse ACE be affordable or even available to hobbyists? My understanding was that it’s a business solution, but maybe I misunderstood.