I am confused about whether I can host the A2F REST API on a cloud provider (like AWS) to generate blendshapes that animate an avatar (in an external 3D engine) and stream it to users.
That may require spinning up many instances (GPUs) in the cloud to accommodate demand.
@mati-nvidia Thank you for your support - I did join the Discord. @Ehsan.HM Great to know that.
In the NVIDIA OMNIVERSE LICENSE AGREEMENT, section 2.3, it is written: “(b) use of Batch by an individual is limited to two GPUs.” I am confused about this, as I may end up creating an A2F instance for each active end-user session.
Right now the Audio2Face app does not support batch processing of different files simultaneously. However, you can launch several instances of A2F on a single machine sharing the same GPU, as long as there is enough memory left.
It is possible to stream a single animation to multiple users.
However, A2F is mostly a GUI app, and the REST API is mostly for controlling the app remotely (e.g. when you want to render several files on a server and automate the process with a script).
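That remote-control workflow can be scripted with plain HTTP calls. Below is a minimal sketch that walks a queue of audio files through a headless A2F instance. The port, endpoint path, player prim path, and payload fields are all assumptions for illustration — check the REST API docs shipped with your A2F build for the real routes.

```python
import json
import urllib.request

A2F_URL = "http://localhost:8011"  # assumed headless port — verify for your build


def set_track_payload(audio_path: str) -> dict:
    """Build the JSON body for loading an audio track.

    The field names and the player prim path are illustrative, not
    confirmed A2F API shapes.
    """
    return {"a2f_player": "/World/audio2face/Player", "file_name": audio_path}


def post(endpoint: str, payload: dict) -> None:
    """Send one JSON POST to the running A2F instance (no error handling)."""
    req = urllib.request.Request(
        A2F_URL + endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)


def process_files(audio_files: list[str]) -> None:
    """Feed each audio file to A2F in turn; export would follow each call."""
    for path in audio_files:
        # Endpoint path is an assumed example, not a confirmed A2F route.
        post("/A2F/Player/SetTrack", set_track_payload(path))
```

A driver like this is what makes the "render several files on a server" use case practical without touching the GUI.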
Eventually, for controlling multiple avatars or providing an online service, the Avatar Cloud Engine (ACE) will be the right solution once it is released. You can learn more here:
Hi @RogerBR - Thank you for your support and guidance.
Although we applied for ACE, we could not get an invite for beta access, and we do not know the release date. Seeing the nice results of A2F, I guess I have to create an instance for each user session on powerful servers and use the headless API (that is why I asked about licenses).
Another option would be to generate the animations for the entire corpus (chatbot responses) with A2F and cache them on the server. Streaming the same animation to multiple users is not a fit for our use case, where each user gets a personalized experience.
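If the corpus is finite, that caching option is simple to prototype: key each pre-rendered animation by a hash of the response text, and only invoke A2F on a cache miss. The sketch below is illustrative (an in-memory dict stands in for whatever store the server actually uses, and `generate_fn` stands in for the expensive A2F render):

```python
import hashlib

# In-memory stand-in for a real cache (disk, Redis, object storage, ...).
_cache: dict[str, bytes] = {}


def cache_key(response_text: str) -> str:
    """Stable key for one chatbot response: SHA-256 of the UTF-8 text."""
    return hashlib.sha256(response_text.encode("utf-8")).hexdigest()


def get_animation(response_text: str, generate_fn) -> bytes:
    """Return the cached animation, rendering it once on first request."""
    key = cache_key(response_text)
    if key not in _cache:
        _cache[key] = generate_fn(response_text)  # expensive A2F render
    return _cache[key]
```

With this layout, repeated responses cost one render total, while personalized responses still fall through to a live A2F instance.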
We need to ship the product as soon as possible. I would appreciate your advice on whether there is a better way of doing this, or whether we can test ACE for our use case.