How do you add an idle animation to the whole body, including the head? I've managed to import a model with animation, which works for the body but doesn't work on the head when using Audio2Face. How do I apply that animation to the head as well?
Thanks, it's driving me mad, lol. I can't seem to find a way to add the animations to the face once it has been set up for Audio2Face. The plan is to add an idle animation to the model and use the live record feature for lip sync. I've been through all the tutorials and haven't found anything that explains this yet.
I've managed to add the body movement within Audio2Face but can't get the head to work. Is there no way of adding the head mesh back onto the skeleton once Audio2Face is applied? Or even just attaching it to the neck bone for some subtle movement? Alternatively, can you run the live recording with Audio2Face through either Machinima or Create? We are trying to make a chatbot that runs from live speech from the mic.
Hi @look3dstudio, we don't support head motion + Audio2Face, nor live Audio2Face inside Machinima or Create yet, but they sound like good future directions. The only way to add head motion in the current version of Audio2Face is to 1) generate the facial animation in Audio2Face, 2) export it as a USD cache, and 3) combine that with the head (or body) motion.
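If it helps, step 3 can be scripted with USD's Python bindings by sublayering the exported cache over the stage that carries the body motion. A minimal sketch, assuming placeholder file names and that both files target the same prim paths:

```python
from pxr import Usd

# Open the stage that carries the head/body motion, then sublayer the
# exported Audio2Face cache on top so the two animations compose.
# File names here are placeholders.
stage = Usd.Stage.Open("character_body_anim.usd")
root = stage.GetRootLayer()
root.subLayerPaths.append("a2f_face_cache.usd")
root.Save()
```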
I wonder how you managed to add the body movement within Audio2Face? That might help us understand your question better. Does it work together with the facial motion from Audio2Face? Is there any video result you can share?
I have actually come up with a workaround that will give us the results we want. We are using iClone and Motion LIVE, and have set up the LIVE FACE app on the iPhone so the camera captures real-time lip sync from the default character's face. Doing this allows us to animate our characters in iClone in real time and get the lip sync from a microphone input, allowing us to create a chatbot. (Shared via a OneDrive link.)
Impressive work, great job!
I am trying to make a virtual anchor too, using OBS and A2F as the output source.
What would you like to use as the final output source?
Hi @look3dstudio,
Yes, you should be able to connect any animation to drive the whole body if you have your character exported as usdSkel.
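For example, binding an idle animation to the skeleton can be done with the UsdSkel Python API. A minimal sketch, where the prim paths are hypothetical and should be replaced with the ones from your own export:

```python
from pxr import Usd, UsdSkel, Sdf

# Bind an idle animation to the character's skeleton via UsdSkel.
# The prim paths below are hypothetical; substitute the ones from
# your own usdSkel export.
stage = Usd.Stage.Open("character.usd")
skel_prim = stage.GetPrimAtPath("/Character/Skeleton")
binding = UsdSkel.BindingAPI.Apply(skel_prim)
binding.CreateAnimationSourceRel().SetTargets(
    [Sdf.Path("/Character/Animations/Idle")]
)
stage.GetRootLayer().Save()
```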
If the character's head mesh has matching topology to the head mesh in the usdSkel, then you can simply select the A2F-driven head mesh, then the head mesh in the usdSkel, and use Toolbox >> BuiltIn >> Mesh >> Connect Points.
The same applies to other meshes on the face, like a beard, if they are driven using a wrap.
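Since Connect Points requires identical face-vertex topology, a quick USD Python check can verify that before you try it. A small sketch (the prim paths in the usage comment are placeholders):

```python
from pxr import Usd, UsdGeom

def same_topology(stage, path_a, path_b):
    """Return True if two meshes share face-vertex topology
    (the precondition for Connect Points); point positions may differ."""
    a = UsdGeom.Mesh(stage.GetPrimAtPath(path_a))
    b = UsdGeom.Mesh(stage.GetPrimAtPath(path_b))
    return (list(a.GetFaceVertexCountsAttr().Get()) ==
            list(b.GetFaceVertexCountsAttr().Get()) and
            list(a.GetFaceVertexIndicesAttr().Get()) ==
            list(b.GetFaceVertexIndicesAttr().Get()))

# Usage (paths are placeholders):
# stage = Usd.Stage.Open("scene.usd")
# print(same_topology(stage, "/World/a2f_head", "/Character/head"))
```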
A similar workflow is shown in this tutorial: How to Use Audio2Face in NVIDIA Omniverse Machinima - YouTube
The difference is that if you want to keep Audio2Face live, you do everything in the Audio2Face app without exporting the result as a cache.
If your A2F-driven head mesh has different topology than the one in your usdSkel (for example, the head is cut off in A2F while the usdSkel one includes the torso as well), then instead of using the Toolbox Connect Points, you can drive them using the Wrap UI and enable maxDistance so A2F doesn't affect the rest of the body.
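To illustrate why enabling maxDistance protects the rest of the body, here is a toy sketch of a distance cutoff. This is not the actual A2F wrap implementation, just the general idea: driven points farther than maxDistance from any driver point receive zero influence.

```python
import numpy as np

def wrap_influence(driven_points, driver_points, max_distance):
    """Toy illustration (not the actual A2F wrap): weight each driven
    point by its distance to the nearest driver point, falling to zero
    at max_distance so torso points get no facial deformation."""
    # Pairwise distances, shape (n_driven, n_driver).
    d = np.linalg.norm(
        driven_points[:, None, :] - driver_points[None, :, :], axis=-1
    )
    nearest = d.min(axis=1)
    # Linear falloff; anything beyond max_distance gets weight 0.
    return np.clip(1.0 - nearest / max_distance, 0.0, 1.0)
```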