Can Audio2Face Headless REST API be used for commercial use

Hi,

I am confused about whether I can host the A2F REST API on a cloud provider (like AWS) to generate blendshapes that animate an avatar (in an external 3D engine) and stream them to users.
That may require spinning up many instances (GPUs) in the cloud to accommodate the demand.

@WendyGram would you mind helping with this?

Thanks in advance.

Hi @bahaa. Welcome to the forums! Please consider joining our community on Discord too: NVIDIA Omniverse

I’ve moved this question to the A2F forum, where someone will be able to jump in and help answer this one.

To answer the question in the title: yes, Audio2Face is free to use for any kind of project.

Here’s a good tutorial on Rest API: Audio2Face Headless and RestAPI Overview - YouTube

I’m not quite sure what you mean by it requiring many GPUs in the cloud. @RogerBR might be able to help

@mati-nvidia Thank you for your support - I did join the Discord.
@Ehsan.HM Great to know that.

In the NVIDIA OMNIVERSE LICENSE AGREEMENT, section 2.3, it is written: “(b) use of Batch by an individual is limited to two GPUs.” I am confused about this, as I may end up creating an A2F instance for each active end-user session.

Thanks

Hi @bahaa

Right now the Audio2Face app does not support batch processing (processing different files simultaneously). However, you can launch several instances of A2F on a single machine sharing the same GPU, as long as there is enough memory left.

It is possible to stream a single animation to multiple users.

However, A2F is mostly a GUI app, and the REST API is mostly for controlling the app remotely (i.e. you want to render several files on a server and automate the process with a script).
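
For example, a rough sketch (in Python, with the requests library) of automating blendshape export through the REST API could look like the following. The endpoint paths, payload fields, prim paths, and default port here are assumptions based on my reading of the headless docs and may differ between A2F versions, so check the /docs page of your running instance for the exact routes:

```python
# Rough sketch only: drive a local headless A2F instance over its REST API to
# batch-export blendshape animations for a folder of audio clips.
# Endpoint paths, payload fields, prim paths and the default port (8011) are
# assumptions based on the headless docs and may differ between A2F versions;
# check http://localhost:8011/docs for the exact routes.
import requests

BASE = "http://localhost:8011"
PLAYER = "/World/audio2face/Player"            # assumed player prim in the loaded scene
SOLVER = "/World/audio2face/BlendshapeSolve"   # assumed blendshape solver node

def export_blendshapes(audio_dir: str, audio_file: str, out_dir: str) -> None:
    # Point the player at the folder containing the audio clips
    requests.post(f"{BASE}/A2F/Player/SetRootPath",
                  json={"a2f_player": PLAYER, "dir_path": audio_dir}).raise_for_status()
    # Select the clip that should drive the face
    requests.post(f"{BASE}/A2F/Player/SetTrack",
                  json={"a2f_player": PLAYER, "file_name": audio_file}).raise_for_status()
    # Export the resulting blendshape weights to JSON
    requests.post(f"{BASE}/A2F/Exporter/ExportBlendshapes",
                  json={"solver_node": SOLVER,
                        "export_directory": out_dir,
                        "file_name": audio_file.rsplit(".", 1)[0],
                        "format": "json"}).raise_for_status()

for clip in ["hello.wav", "goodbye.wav"]:   # hypothetical clips
    export_blendshapes("/data/audio", clip, "/data/anim_cache")
```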

Eventually, for controlling multiple avatars or providing an online service, the Avatar Cloud Engine (ACE) will be the right solution once it is released. You can learn more here:

Generative AI Sparks Life into Virtual Characters with NVIDIA ACE for Games | NVIDIA Technical Blog

Hi @RogerBR - Thank you for your support and guidance.

Although we applied to ACE, we could not get an invite for beta access, and we do not know the release date. Seeing the nice results of A2F, I guess I have to create an instance for each user session on powerful servers and use the headless API (that is why I asked about licenses).

Another option would be to generate the animations for the whole corpus (chatbot responses) with A2F and cache them somewhere on the server. Streaming the same animation to multiple users is not a fit for our use case, where each user will get a personalized experience.
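
As a rough sketch of that caching idea (purely an assumption about how it could be wired up): pre-export the blendshape JSON for every chatbot response (e.g. with REST calls like the sketch earlier in the thread) and look it up per session by a hash of the response text:

```python
# Hedged sketch of the caching option: animations are pre-generated with A2F and
# stored as JSON files named after a hash of the chatbot response text.
# CACHE_DIR and the .json layout are hypothetical, not an A2F convention.
import hashlib
import json
import os

CACHE_DIR = "/data/anim_cache"

def cache_key(response_text: str) -> str:
    # Stable key per unique response, independent of the user session
    return hashlib.sha256(response_text.encode("utf-8")).hexdigest()

def load_cached_animation(response_text: str):
    path = os.path.join(CACHE_DIR, cache_key(response_text) + ".json")
    if not os.path.exists(path):
        return None              # not pre-generated; fall back to a live A2F instance
    with open(path) as f:
        return json.load(f)      # per-frame blendshape weights exported by A2F
```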

We need to ship the product as soon as possible. I would appreciate your advice on whether there is a better way of doing this, or whether we can test ACE for our use case.

Many thanks

@wtelford1 might be able to help you with your question about licenses and testing ACE

In the meantime, @bahaa, using multiple instances of A2F or caching the animations as you mentioned will work.

There is a waiting list for ACE at the moment as we got an overwhelming amount of interest. Stay tuned for updates and contact from the ACE team.

Hi Roger,
For the REST API, is there a way to assign the port manually? I can’t find instructions in this document: Rest API — Omniverse Audio2Face latest documentation (nvidia.com)
Many thanks!

@bill_ye Not right now; we did not expose that setting. We use the default value for the service.

Although it can be set by adding these lines to the app’s .kit file:

[settings.exts."omni.services.transport.server.http"]
port = 8021

Or by launching the .bat from a terminal with:

.\audio2face.bat --/exts/omni.services.transport.server.http/port=8021
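
To check that the override took effect, something along these lines should work; the /status health-check route is what the headless docs use, so adjust it if your version exposes a different one:

```python
# Quick sanity check (assumes the /status health-check route of the headless service)
import requests
print(requests.get("http://localhost:8021/status", timeout=5).text)  # expect "OK"
```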
