Export lip sync animation as a video file in a pythonic way (automation) to use the service on demand

Hi, I would like to render the lip sync animation on the web. For this, I have used Audio2Face (via its headless APIs) + Blender to generate the lip sync animation, and I want to export the animation as an MP4 video from Blender, following Parts 1 and 2 of the tutorial series Omniverse Audio2Face and Blender | Part 2: Loading AI-Generated Lip Sync Clips - YouTube. But I've had no luck exporting the video output from Blender using “Render Animation”. I also checked the options below:

  1. Convert the A2F USD cache to a GLB file to render it on the web. Challenges faced: the skin/textures are lost in the visible-meshes USD cache, and the conversion to GLB drops the animation (see the conversion sketch after this list).
  2. Import the USD cache into Machinima to export the animation as MP4, but I didn't find any pythonic way / REST APIs to automate this. Does Machinima support automation using the Movie Capture extension?
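
For option 1, a Blender-side conversion along these lines is what I have in mind; this is only a rough sketch (it assumes Blender 3.x with the built-in USD importer, and the file paths are placeholders):

```python
import bpy

# Import the Audio2Face USD cache (wm.usd_import needs Blender 3.0+).
bpy.ops.wm.usd_import(filepath='/tmp/a2f_cache.usd')

# Export to GLB. export_morph keeps shape keys and export_animations keeps
# their keyframes -- the parts that went missing in my conversion attempts.
bpy.ops.export_scene.gltf(
    filepath='/tmp/a2f_cache.glb',
    export_format='GLB',
    export_animations=True,
    export_morph=True,
)
```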

Can someone please guide me?

Ref: How to export an audio2face as a video file?, Audio to face quick render option? - #3 by nic.tanghe
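
To make concrete what I mean by a pythonic export, this is roughly the kind of script I was hoping to run headlessly (a sketch; the scene file and output path are placeholders):

```python
# render_mp4.py -- run headlessly with:  blender -b scene.blend -P render_mp4.py
import bpy

scene = bpy.context.scene
scene.render.image_settings.file_format = 'FFMPEG'  # video output
scene.render.ffmpeg.format = 'MPEG4'                # .mp4 container
scene.render.ffmpeg.codec = 'H264'
scene.render.filepath = '/tmp/lipsync_out.mp4'      # placeholder path

# Equivalent to Render > Render Animation in the UI.
bpy.ops.render.render(animation=True)
```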

Hello and welcome to the forums @snehareddyp

Is your animation fully functional in Blender? If so, are you planning to convert it into a video file for uploading to the web, or do you intend to live stream the animation directly from Blender to the web?

Or is your animation not functioning properly in Blender?

Thanks for the reply, Ehsan.
Yes, the animation is fully functional in Blender. The plan is to output a video file (preferably one with a transparent background) to render it on the web.
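
One thing I've noticed while researching this: MP4/H.264 doesn't carry an alpha channel, so for the transparent background I'd probably have to render an RGBA PNG sequence with a transparent film and encode it to WebM afterwards. A rough sketch of what I mean (paths and frame rate are placeholders, and it assumes ffmpeg is on the PATH):

```python
import bpy
import subprocess

scene = bpy.context.scene
scene.render.film_transparent = True             # render the world as alpha
scene.render.image_settings.file_format = 'PNG'
scene.render.image_settings.color_mode = 'RGBA'  # keep the alpha channel
scene.render.filepath = '/tmp/lipsync/frame_'    # placeholder output path
bpy.ops.render.render(animation=True)

# VP9 in a WebM container is one of the few web-friendly codecs with alpha.
subprocess.run([
    'ffmpeg', '-framerate', '24',
    '-i', '/tmp/lipsync/frame_%04d.png',
    '-c:v', 'libvpx-vp9', '-pix_fmt', 'yuva420p',
    '/tmp/lipsync/out.webm',
], check=True)
```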

This seems more like a Blender rendering issue. While it is possible to render things in Omniverse or Machinima, for your use case Blender seems a better fit.

Here are some tutorials I found on YouTube showing how to render animations in Blender:
How to Render Your 3d Animation to a Video File (Blender Tutorial) - YouTube
How to render animation as video in Blender 2.92 - YouTube

Thank you. I have followed the steps in the videos to export a final video file. I'm facing a few last-minute issues: I'm not able to see the emotions on the animation.

In A2F, I generated the auto emotion keyframes in Auto Emotion. The emotions (and eyebrow movements) are visible on Mark's face but not on the target skel (Rain). The target skel only shows the lip movement, and the eye blink is missing on Rain even with the blink interval set to 3 seconds.
Is this a limitation of the Blender connector, or can we render emotions (animate the brows and dynamic head movements) by adding further steps?
Reference: Audio2Emotion Feature in Omniverse Audio2Face ~ Ai Facial Animation ~ Eyes, Skin, Tongue & Mouth - YouTube

This is the final output file, which doesn't show the emotions:
(Attachment: 0448-0733_out.webm)

Hi @snehareddyp

Are you able to share your scene file? I’m not able to open the video file you’ve attached. Is it possible to change its format?

Usually, if the eyes and brows don't move, it means you've selected the regular core instead of the full face core. Can you confirm this is set to full face?

Hi Ehsan, sorry for the late reply; I'm back from my planned leave today. I have opted to attach the full face components, and the output file is attached here. Please check if you can access it. I'm currently blocked on the static eyebrows / their distortion issue.


Have you tried attaching the eyebrows to the face using the Prox UI?

I am following the exact steps from this tutorial series:
Audio2Face with Blender | Part 1: Generating Facial Shape Keys - YouTube
[Audio2Face with Blender | Part 2: Loading AI-Generated Lip Sync Clips - YouTube](https://www.youtube.com/watch?v=TIkpVjKEkvY&list=PL3jK4xNnlCVfIJU10iQ_dd8J_5T_e0YTJ&index=2&ab_channel=NVIDIAOmniverse)
https://www.youtube.com/watch?v=mUX6FId0UyA&t=2612s

Can you share the steps for attaching the eyebrows using the Prox UI in A2F, or should this be done in Blender?

It looks like the eyebrows haven't been set up. To do this, you need to add the eyebrows as Dynamic Meshes. Then, after the setup, you need to attach them to the face skin using the Prox UI.
Take a look at this : Setting up a Custom Character in Omniverse Audio2Face - YouTube

I strongly recommend watching the whole video, but if you just want to see the eyebrows, it's at the 6:00 timestamp.

The Dynamic Meshes were already added previously. Now I have attached them to the skin using the Prox UI.

But still no luck. Is it because there is only one eyebrow mesh (as shown in the screenshot)? Do we need to export separate eyebrow meshes for left and right in the initial Blender export of the dynamic meshes, before the A2F step? Can you share a video covering this step of separating the eyebrows into left and right, if one exists?

Eyebrows don’t have to be separated.
The scale of your character seems to be very different from the default A2F character. Can you match them before doing the character transfer?
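
If you'd rather do the matching with a script on the Blender side, something like this would work (the object name and scale factor are placeholders; pick whatever factor brings your character to the same size as the default A2F head):

```python
import bpy

obj = bpy.data.objects['MyCharacter']  # placeholder object name
obj.scale = (100.0, 100.0, 100.0)      # placeholder factor -- match A2F's size

# Bake the scale into the mesh data so the character transfer sees a clean,
# unit-scale transform.
bpy.context.view_layer.objects.active = obj
obj.select_set(True)
bpy.ops.object.transform_apply(location=False, rotation=False, scale=True)
```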