So this must be possible, but I can’t figure it out: how do I get audio streaming into Audio2Face together with Audio2Gesture? They seem sadly so close together and yet so far apart. I can stream audio to drive the face in Audio2Face, but I can’t seem to get gestures in there. Trying to add the Audio2Gesture extension then pulls in the anim.graph extension as a dependency, and from there I start getting permission errors. It doesn’t feel like it’s meant to happen.
And gestures are in Machinima, but from what I’ve seen you export a USD cache with everything pre-set-up and take it over there. Though you can stream to gestures. Perfect. I can stream to both, just what I want. How the heck do I bring that together?
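For reference, the Audio2Face side of my streaming works roughly like NVIDIA's sample gRPC client (`test_client.py`). The sketch below is from memory of that sample: the `audio2face_pb2`/`audio2face_pb2_grpc` stub names, message fields, the `/World/audio2face/PlayerStreaming` prim path, and port 50051 all come from it and may differ in your build, so treat them as assumptions.

```python
# Minimal sketch of pushing audio into Audio2Face's streaming player,
# modeled on NVIDIA's sample gRPC client (test_client.py). Proto and
# field names are assumptions from that sample, not guaranteed APIs.
import struct

def chunk_pcm(samples, chunk_size=4096):
    """Split float32 PCM samples (a flat list of floats) into
    little-endian byte chunks sized for streaming."""
    chunks = []
    for i in range(0, len(samples), chunk_size):
        block = samples[i:i + chunk_size]
        chunks.append(struct.pack("<%df" % len(block), *block))
    return chunks

def push_to_audio2face(samples, samplerate=16000,
                       instance_name="/World/audio2face/PlayerStreaming",
                       url="localhost:50051"):
    # Requires grpcio plus the audio2face_pb2* stubs shipped with the
    # Audio2Face streaming sample, and a running Audio2Face instance.
    import grpc
    import audio2face_pb2
    import audio2face_pb2_grpc
    with grpc.insecure_channel(url) as channel:
        stub = audio2face_pb2_grpc.Audio2FaceStub(channel)

        def requests():
            # First message carries the stream metadata (field names as
            # in the sample client -- verify against your version).
            yield audio2face_pb2.PushAudioStreamRequest(
                start_marker=audio2face_pb2.PushAudioRequestStart(
                    samplerate=samplerate,
                    instance_name=instance_name,
                    block_until_playback_is_finished=True))
            # Subsequent messages carry raw float32 PCM bytes.
            for chunk in chunk_pcm(samples):
                yield audio2face_pb2.PushAudioStreamRequest(audio_data=chunk)

        stub.PushAudioStream(requests())
```

The chunking is the only part that runs outside Omniverse; everything under `push_to_audio2face` depends on the sample's generated stubs being on your path.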
I thought that gave away the answer for a minute: of course, just load the Audio2Face USD into Machinima. But then I got in there and all the audio connections were gone from the graph. That seems resolvable, but the core Audio2Face component is nowhere to be found even if you reconnect everything, and it seems to be a key player.
You’re right. I apologize for the confusion.
While mixing Audio2Face and Audio2Gesture is not possible in Machinima at the moment, they might become available to the public in one package in the future.
Bummer, it’s hard to imagine the intention was not for them to coexist for a virtual avatar. ChatGPT now demands it. I hope it comes soon. I don’t know what I’m more bummed about: this, or character colliders not following their animation yet.
Since bringing gesture into face failed me, I tried bringing the face core into Machinima, and I thought I had it: the graph connections were there, the Audio2Face core component was there, and it all looked good from the graph perspective. The audio even played, but for the life of me I couldn’t get it to animate Mark’s face or the face he was driving. I didn’t do a detailed comparison, but a quick look seemed to show identical graphs and connections.