Hi, I am trying to set up NVIDIA ACE to create animation videos from Audio2Face.
Following the documentation under Audio2Face — ACE documentation, I was able to set up a local Docker instance of the Audio2Face cluster, and when running the sample application for connecting to the A2F Controller (Sample application connecting to A2F Controller — ACE documentation), I received the timestamped blendshape CSVs. Below is a minimal sketch of how I'm currently reading them.
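For context, this is roughly how I'm parsing the output (the file name and the exact column layout are assumptions based on my export; I only assume the first column is a timestamp and the rest are per-blendshape weights):

```python
import pandas as pd

# Load the timestamped blendshape CSV produced by the A2F Controller sample.
# "a2f_blendshapes.csv" is a placeholder for my actual export file.
df = pd.read_csv("a2f_blendshapes.csv")

# Assumption: each row is one animation frame, with a time code column
# followed by one column per blendshape (e.g. jawOpen, eyeBlinkLeft, ...).
time_col = df.columns[0]          # assumed timestamp column
blendshape_cols = df.columns[1:]  # assumed per-blendshape weight columns

for _, row in df.iterrows():
    t = row[time_col]
    weights = row[blendshape_cols].to_dict()  # blendshape name -> weight
    # This per-frame weight dict is what I want to drive a character with.
```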
How can I use these CSVs to programmatically create an animation (video along with audio), possibly with Python?
In essence, how do I convert this blendshape information into an actual video? (To begin with, I want to use the default character available under A2F.)
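To make the question concrete, this is the pipeline I imagine; the missing piece is rendering the default A2F character from a set of blendshape weights. The `render_frame` function, frame rate, and file names here are placeholders, not anything from the ACE docs:

```python
import subprocess

def render_frame(weights: dict, out_path: str) -> None:
    """Render one image of the character posed by the given blendshape
    weights. This is exactly the part I don't know how to do."""
    raise NotImplementedError

# Once frames exist as frames/frame_0000.png, frame_0001.png, ...,
# muxing them with the original audio via ffmpeg should be straightforward:
fps = 30  # assumed frame rate derived from the CSV timestamps
subprocess.run([
    "ffmpeg", "-y",
    "-framerate", str(fps), "-i", "frames/frame_%04d.png",
    "-i", "input_audio.wav",
    "-c:v", "libx264", "-pix_fmt", "yuv420p",
    "-c:a", "aac", "-shortest",
    "output.mp4",
], check=True)
```

Is there a recommended way to implement that per-frame rendering step against the default A2F character, or a different intended workflow entirely?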