So this must be possible, but I can’t figure it out. How do I get audio streaming into Audio2Face together with Audio2Gesture? They seem so close together and yet sadly so far apart. I can stream audio to drive the face in Audio2Face, but I can’t seem to get gestures in there. Trying to add the Audio2Gesture extension pulls in the anim.graph extension as a dependency, and from there I start getting permission errors. It doesn’t feel like it’s meant to happen.
And gestures are in Machinima, but everything I’ve seen says you export a USD cache with the pre-set-up pieces to take over there. Though you can stream to gestures. Perfect, I can stream to both, which is just what I want. How the heck do I bring that together?
Set up two files, one for Audio2Face and the other for Audio2Gesture. It’s critical that the player node has the same path in both files, e.g. /World/audio2face/Player.
In a new file, import both of these files as separate layers (a minimal scripted version of this is sketched below, after these steps).
Wait for both the Audio2Face and Audio2Gesture TensorRT models to load fully.
You should now be able to see a single audio player in the Audio2Gesture (or possibly the Audio2Face) tab.
You will need to connect the face animation from Audio2Face to the final head geo from Audio2Gesture using deformers, e.g. prox, GetPoints/SetPoints (illustrated in the second sketch below).
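To make the layering step concrete, here is a minimal sketch using the USD Python API (e.g. in the Script Editor). The file names a2f_setup.usd, a2g_setup.usd, and combined.usd are placeholder assumptions; only the /World/audio2face/Player path comes from the steps above.

```python
from pxr import Usd

# Create the new combined stage (the "new file" from the steps above).
stage = Usd.Stage.CreateNew("combined.usd")
root_layer = stage.GetRootLayer()

# Add both setup files as sublayers. Earlier entries are stronger in
# USD layer composition, so Audio2Face opinions win on any overlap.
root_layer.subLayerPaths.append("a2f_setup.usd")  # Audio2Face setup
root_layer.subLayerPaths.append("a2g_setup.usd")  # Audio2Gesture setup

# Because both files author the player at the same path, their opinions
# compose into one prim here, which is why the matching path is critical.
player = stage.GetPrimAtPath("/World/audio2face/Player")
print("player composed:", player.IsValid())

root_layer.Save()
```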
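And here is a hedged illustration of what the GetPoints/SetPoints deformer hookup transfers, written as a one-shot script rather than graph nodes. Both mesh paths are hypothetical placeholders; substitute the actual Audio2Face output mesh and the head geo that Audio2Gesture drives.

```python
from pxr import Usd, UsdGeom

stage = Usd.Stage.Open("combined.usd")

# Placeholder paths, replace with the real prims in your scene.
src_mesh = UsdGeom.Mesh(stage.GetPrimAtPath("/World/audio2face/face_mesh"))
dst_mesh = UsdGeom.Mesh(stage.GetPrimAtPath("/World/character/head_geo"))

# "GetPoints": read the deformed vertex positions from the A2F result.
points = src_mesh.GetPointsAttr().Get()

# "SetPoints": write them onto the head geo that A2G animates. This only
# works if both meshes share identical topology (vertex count and order).
dst_mesh.GetPointsAttr().Set(points)
```

In the actual setup this transfer happens per frame inside the graph, but the idea is the same: the face deformation moves across as raw points, so matching topology between the two heads is what makes it work.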
I thought that gave away the answer there for a minute - of course, just load the Audio2Face USD into Machinima. But then I got in there and all the audio connections in the graph were gone, which seems resolvable, but the core Audio2Face component is nowhere to be found even if you did fix them, and it seems to be a key player.
You’re right. I apologize for the confusion.
While mixing Audio2Face and Audio2Gesture is not possible in Machinima at the moment, they might become available to the public in one package in the future.
Bummer, it’s hard to imagine the intention was not that they coexist for a virtual avatar. ChatGPT now demands it. I hope it comes soon. I don’t know what I’m more bummed about - this or character colliders not following their animation yet.
Since bringing gesture into face failed me, I tried bringing the face core into Machinima, and I thought I had it: the graph connections were there, the Audio2Face core component was there, it all looked good from the graph perspective, and the audio played, but for the life of me, I couldn’t get it to animate Mark or the face he was driving. I didn’t do a detailed comparison, but a quick look seemed to show identical graphs and connections.