Hey guys, I’m new to the Audio2Face tool, and I gotta say, I don’t have an RTX GPU to test most of the Omniverse features on my own laptop yet. So I don’t understand much of it so far, and I’d like to learn more before buying hardware to use it in my work pipeline. Can you help me with a few questions?
In order to use Audio2Face to generate facial expressions, do I have to create the blendshapes from scratch beforehand? E.g., I create an Anger blendshape in Blender and export it, and then A2F reads the audio, interprets it as anger, and drives my pre-made blendshape? Or can Audio2Face sort of “rigify” the face and apply its own new or existing blendshapes to the character? Can we use a blendshape preset?
Welcome to A2F! I would encourage you to take a peek at the tutorials by Charles:
It may not answer ALL of the questions you have listed, but it should give you a general overview of what you can expect following the workflow in Blender. I’ll let more seasoned users chime in on your specific questions (I am still learning A2F myself).
I am sure you may have already seen this, but if you do plan to upgrade or procure a machine with the intent to use A2F, here are the recommended specs for both laptop and workstation for professional use. In addition, this page lists the OS and mesh requirements to check before you commit to anything.
Some of the demo scenes use around 6 GB of VRAM, so having a beefy GPU is definitely preferred, especially if you plan to do live sync with another app.