It was working for me a few days back, but now I cannot play videos on NVIDIA On-Demand.
I am getting this error: "There was an error loading the video"
Hi there, and thanks for posting. I have to be honest here and say we probably pulled those videos a) because they are so old and out of date, and b) because the app itself, Audio2Face, is gone. It is deprecated. Are you still using it, through the Launcher? Yes, I am sorry, but it has been deprecated now. The new method is to use our Audio2Face NIM and the Maya plugin.
Let me check this for sure and get back to you.
Okay, thanks for the quick response. I can check out the Maya plugin as you mentioned.
I wanted to use Unreal Engine, but realized that MetaHuman is not supported on Linux, as per: UE 5.6 Metahuman Character Creator Linux support - #11 by Erbavor - Feedback & Requests - Epic Developer Community Forums
That is when I thought of just using the Omniverse Launcher to learn and get started.
But my use case is: I am on Linux/Ubuntu, and I have my own TTS/STT model which I run using Python scripts. I want to integrate that with a 3D avatar model for lip sync, and thought of using Audio2Face by connecting the output of my Python scripts to the 3D avatar (Omniverse/Unreal/Maya) over HTTP/TCP.
Would Maya be the best option for this setup? Please advise, thank you! :)
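To show the kind of bridge I mean, here is a minimal sketch of pushing TTS output to an Audio2Face-style HTTP service, assuming it accepts raw WAV bytes and returns JSON. The URL, route, and response shape here are placeholders I made up, not the actual NIM API:

```python
import json
import urllib.request

def send_audio_to_a2f(wav_path, url="http://localhost:8000/v1/audio2face"):
    """POST the WAV produced by the TTS script to a (hypothetical)
    Audio2Face HTTP endpoint and return the decoded JSON response.
    The route and payload format are assumptions; check the NIM docs."""
    with open(wav_path, "rb") as f:
        audio = f.read()
    req = urllib.request.Request(
        url, data=audio, headers={"Content-Type": "audio/wav"}, method="POST"
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

The idea would be to call this right after my TTS script writes its WAV, and feed the returned blendshape frames into whatever 3D app ends up driving the avatar.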
Not on Linux, no. Nothing runs on Linux unless you use our new A2F containers and microservices. Sometimes it is just easier to change your OS setup than your software preference. If Unreal and Maya are the way to go, go with Windows :-)
Ok, got it. I can switch to Windows as a last resort.
I do have the A2F Docker setup on Linux and was able to run it successfully and generate the output folder with blendshape data. Do you think I could somehow use that output with Maya or any other NVIDIA-supported 3D avatar on Linux/Ubuntu?
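For reference, here is a small sketch of how I am reading that output back in, assuming the exported JSON carries a list of blendshape names plus a per-frame weight matrix (the key names below are illustrative; adjust them to whatever your container actually wrote):

```python
import json

def load_blendshape_tracks(json_path):
    """Convert per-frame blendshape weights into per-shape tracks.

    Assumes (illustratively) the export looks like:
      {"facsNames": ["jawOpen", ...], "weightMat": [[0.1, ...], ...]}
    where each row of weightMat is one frame, ordered like facsNames.
    """
    with open(json_path) as f:
        data = json.load(f)
    names = data["facsNames"]
    frames = data["weightMat"]
    # One list of weights per shape, indexed by frame.
    return {name: [frame[i] for frame in frames] for i, name in enumerate(names)}
```

That gives me a dict like `{"jawOpen": [w0, w1, ...], ...}` which seems easier to map onto a rig than the raw frame rows.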
If you already have the blendshape data, then yes, just apply it to your character in USD Composer or with Maya.
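For the USD Composer route, here is a minimal sketch of turning per-shape weight tracks into a UsdSkel animation clip, written as plain .usda text so it needs no DCC app. It is a sketch only: the clip still has to be bound to your character's skel:animationSource, and the shape names must match your rig:

```python
def blendshape_tracks_to_usda(tracks, fps=30.0):
    """Emit a minimal SkelAnimation .usda from per-shape weight tracks
    shaped like {"jawOpen": [w0, w1, ...], ...}.

    Attribute names follow UsdSkel conventions; the prim name "A2FClip"
    is arbitrary.
    """
    names = sorted(tracks)
    num_frames = len(tracks[names[0]])
    lines = [
        "#usda 1.0",
        "(",
        "    startTimeCode = 0",
        "    endTimeCode = %d" % (num_frames - 1),
        "    timeCodesPerSecond = %s" % fps,
        ")",
        "",
        'def SkelAnimation "A2FClip"',
        "{",
        "    uniform token[] blendShapes = [%s]"
        % ", ".join('"%s"' % n for n in names),
        "    float[] blendShapeWeights.timeSamples = {",
    ]
    for frame in range(num_frames):
        weights = ", ".join(str(tracks[n][frame]) for n in names)
        lines.append("        %d: [%s]," % (frame, weights))
    lines += ["    }", "}"]
    return "\n".join(lines)
```

Save the returned text as a .usda file, reference it into your stage, and point the character's skeleton at it.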