Generating the Animation Video from Audio2Face Controller Output | NVIDIA ACE

Hi, I am trying to set up NVIDIA ACE to create animation videos from Audio2Face.

Following the documentation under Audio2Face — ACE documentation,
I was able to set up a local Docker instance of the Audio2Face cluster, and after running the sample application for connecting to the A2F Controller (Sample application connecting to A2F Controller — ACE documentation), I received the timestamped blendshape CSVs.

How can I use these CSVs to create the animation (video along with audio) programmatically (possibly with Python)?

In essence, how do I convert this blendshape information into an actual video? (To begin with, I want to use the default character available under A2F.)
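Once the per-frame blendshape weights have been applied to a character and rendered out as an image sequence (the rendering step itself is outside this sketch), the frames and the original audio can be muxed into a video with ffmpeg. A hedged sketch that only assembles the command; ffmpeg must be installed separately, and all paths and the frame rate are placeholders:

```python
# Sketch: assemble an ffmpeg command that turns a rendered image sequence
# plus the original audio clip into an .mp4. Paths, pattern, and fps are
# placeholders, not values prescribed by A2F.
def build_ffmpeg_cmd(frames_pattern, audio_path, out_path, fps=30):
    return [
        "ffmpeg",
        "-framerate", str(fps),   # input frame rate of the image sequence
        "-i", frames_pattern,     # e.g. "frames/frame_%05d.png"
        "-i", audio_path,         # the audio originally sent to Audio2Face
        "-c:v", "libx264",        # H.264 video
        "-pix_fmt", "yuv420p",    # widely compatible pixel format
        "-c:a", "aac",            # AAC audio
        "-shortest",              # stop at the shorter of video/audio
        out_path,
    ]

cmd = build_ffmpeg_cmd("frames/frame_%05d.png", "input.wav", "animation.mp4")
print(" ".join(cmd))  # pass cmd to subprocess.run(cmd, check=True) to execute
```

This only covers the final muxing step; getting from blendshape weights to rendered frames requires driving a rigged character in a renderer (e.g. Omniverse).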


Hello @koushik3

We’ve moved NVIDIA A2F & ACE support! We have new places to better help you get the support you need.

  1. Developers can submit tickets through the NVIDIA AI Enterprise program (NVIDIA Enterprise Customer Support)
  2. Developers can discuss A2F & ACE through our NVIDIA Developer Discord Server (NVIDIA Developer)