Animation Graph has nodes that can accept “Pose” data, including the PoseProvider node, which can take joint translations, joint rotations, skeleton delta translation, and skeleton delta rotation as inputs. Audio2Face output seems to be a combination of blendshapes and at least eye rotations, so I was not sure how to use Animation Graph with Audio2Face to mix things together (for either live or recorded Audio2Face output).
This will be possible in the future. But for now, if you have the body animation, you can bring it into Audio2Face and apply face animations to it.
Here’s a tutorial from the Camila tutorial series:
Camila Asset Pt 6: Connect Character Setup Meshes to Drive the Full Body in Omniverse Audio2Face - YouTube
Thanks for the video. Yes, I have watched it a number of times. I am trying to work out how to use Audio2Face in a more realistic project. Imagine a scene in a movie where multiple characters talk back and forth, with changing emotions and other facial animations (like winking) beyond what Audio2Face can do. Blendshapes make the most sense because you can merge in other facial expressions that Audio2Face cannot generate.
So my question is how to mix various animation sources (with blending between them) and blendshape animations (including Audio2Face output). Animation Graph is one approach - it has blending and various other capabilities built in. I was trying to see whether Audio2Face had any integration with it. Sounds like “not yet”.
Otherwise I will write my own Sequencer and blend all the sources together myself into a final “baked” animation clip (one per character, with all blendshapes and body motions). I need to work through the Audio2Face APIs (including the REST APIs) to see exactly what they support and how to achieve this.
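To make that plan concrete, here is a rough sketch of the kind of baking step I have in mind. It is only a sketch under my own assumptions - the shape names, file path, frame range, and the simple mask-based blend are placeholders, and it assumes the per-frame Audio2Face blendshape weights have already been exported - with the result authored as a UsdSkel Animation prim:

```python
import numpy as np
from pxr import Usd, UsdSkel, Sdf, Vt

# Hypothetical inputs: per-frame blendshape weight tracks, shape (num_frames, num_shapes).
# a2f_weights would come from an exported Audio2Face take; wink_weights is a hand-authored layer.
shape_names = ["eyeBlinkLeft", "eyeBlinkRight", "jawOpen"]  # must match the mesh's skel:blendShapes order
num_frames = 120
a2f_weights = np.zeros((num_frames, len(shape_names)), dtype=np.float32)
wink_weights = np.zeros((num_frames, len(shape_names)), dtype=np.float32)

# Per-frame, per-shape mask in [0, 1]: 0 keeps the Audio2Face value, 1 overrides it with
# the hand-authored layer (here, forcing the left eyelid closed for a brief wink).
override = np.zeros_like(a2f_weights)
override[30:45, 0] = 1.0
wink_weights[30:45, 0] = 1.0

blended = np.clip((1.0 - override) * a2f_weights + override * wink_weights, 0.0, 1.0)

# Bake the blended weights into a UsdSkel Animation prim as blendShapeWeights time samples.
stage = Usd.Stage.CreateNew("character_baked.usda")
stage.SetStartTimeCode(1)
stage.SetEndTimeCode(num_frames)
stage.SetTimeCodesPerSecond(30)

anim = UsdSkel.Animation.Define(stage, Sdf.Path("/Character/Anim"))
anim.CreateBlendShapesAttr().Set(Vt.TokenArray(shape_names))
weights_attr = anim.CreateBlendShapeWeightsAttr()
for frame, frame_weights in enumerate(blended, start=1):
    weights_attr.Set(Vt.FloatArray(frame_weights.tolist()), Usd.TimeCode(frame))

stage.GetRootLayer().Save()
```

My assumption is that the body motion would be layered onto the same Animation prim through its joint translation/rotation/scale attributes, with the character's SkelRoot bound to this baked prim via skel:animationSource, but I still need to confirm how the Audio2Face export maps onto that.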