How to apply animation in Blender

Hi, I’ve managed to give my character 46 blendshapes in Blender using Audio2Face and by watching this video

But I’m struggling to understand: how can I apply the animation to my character’s blendshaped face in Blender?
I’ve watched this video

I’ve even managed to create a blendshaped UsdSkel for my character, but when I try to set up the blendshape solve for my character in A2F Data Conversion > Blendshape Conversion, I can’t choose my character’s result file in Input Anim Mesh; that file is simply not clickable. I’ve tried selecting the Mark head animated template, but my UsdSkel mesh named base__Audio2Face_EX_neutral doesn’t get the animation as shown in the video. What am I missing?



I’m also attaching the USD project file; maybe it will be helpful.
audio2face_test_13.usd (22.6 MB)

Any help here, guys? This is almost impossible to figure out on my own; there is no clear workflow on how to give head animation to your character.


Hello!
I have also tried this and have successfully placed the Audio2Face animation in Blender:

While it’s true that in Audio2Face you can export the animation as a UsdSkel and then import it into Blender via a .usd model, Blender still doesn’t support importing only the UsdSkel animation.
Because of this, you have to go through an additional process in Omniverse Create: remove the extra SkelAnimation shown in the tutorial you linked and change the Animation Source. (This step is from the “Exporting to Unity using Blendshapes within Omniverse Audio2Face” video, from 3:35 to 5:32.)

Exporting to Unity using Blendshapes within Omniverse Audio2Face

Currently it is only possible to import SkelAnimation animations into Blender together with a 3D model. If you look in the Blender Connector documentation, it says that it is currently not possible to export “Absolute Shape Keys”. I’ve tried to import .usd files containing only the SkelAnimation, but they don’t import anything in Blender.


(User Manual — Omniverse Connect latest documentation)

In my experience I can tell you that it is much easier to import the animation as JSON and not as SkelAnimation.

First, you need a model that has the Audio2Face blendshapes in order to export them as JSON. That model is the “Blue Mark” in the Omniverse folder, named “male_bs_46.usd”. The following tutorial explains it: Batch Audio Process Overview in Omniverse Audio2Face, from 0:32 to 2:00.

Next, you need a Script to import the JSON file into your 3D Model.
The tutorial to do it is this: (https://www.youtube.com/watch?v=uocQdPF_2Hc&t=850s) - Starts at 14:10

The script used in the tutorial is this: Link of the Script

You must make a small modification to the script to make it work in Blender: on line 48, where it says “bs_limit = 68”, change the number to the number of blendshapes your model has, which is 46 by default. It should read: bs_limit = 46

The script will load a small window, and to use it, you must:
Go to Object Mode in Blender > Select the Face Mesh of the Model with the Blendshapes > Go to the Addon window, and click on “Create Animation” > Load the JSON
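
In case the script link ever breaks, here is a minimal sketch (not the original “Josh Tool” script, just my own illustration) of the core of what such an importer does: it reads the Audio2Face JSON export and keys the weights onto the mesh’s shape keys, frame by frame. It assumes the exported JSON contains “facsNames” and “weightMat” entries, which is what the A2F blendshape-weight export usually looks like, and that your shape keys are named after those poses:

import json
import bpy

bs_limit = 46  # same idea as the bs_limit line in the tutorial script: the number of blendshapes on your mesh

def apply_a2f_json(filepath, obj):
    # Load the Audio2Face blendshape-weight export (assumed keys: "facsNames", "weightMat")
    with open(filepath) as f:
        data = json.load(f)

    names = data["facsNames"][:bs_limit]          # pose names coming from A2F
    weights_per_frame = data["weightMat"]         # one list of weights per frame
    key_blocks = obj.data.shape_keys.key_blocks   # the shape keys on the selected mesh

    for frame, frame_weights in enumerate(weights_per_frame, start=1):
        for i, name in enumerate(names):
            block = key_blocks.get(name)
            if block is None:
                continue  # name mismatch between A2F pose and shape key: skip it
            block.value = frame_weights[i]
            block.keyframe_insert("value", frame=frame)

# Example: run it on the face mesh selected in Object Mode
apply_a2f_json("/path/to/a2f_weights.json", bpy.context.active_object)

The bs_limit variable here plays the same role as the one on line 48 of the tutorial script: it has to match the number of blendshapes on your mesh (46 in this case), otherwise the weight columns won’t line up with your shape keys.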


Thank you so much for the reply, you are the best. I’m just a beginner and struggle to understand most of what you wrote :D. I hope I will have more time to examine your reply in detail. However, after testing a bit I’ve realized that I simply can’t apply the animation to my blendshaped mesh; I’ve described it in this topic

Your reply is so detailed; I’m so thankful for the time you’ve spent on my problems.

I’m an absolute zero with coding. For that script, do I only need to paste the text of the script into Blender, change the blendshape number on line 48 to 46, and press Enter? Will that work? 😳

Hello! Thanks for your reply! I’m glad to help 😃
Yes, it will work! This script doesn’t require you to know coding.
Just remember that when you put it into Blender, you must modify line 48 (bs_limit = 68) and change the number 68 to the number of blendshapes your model has, which is 46 by default. If this number doesn’t match your number of blendshapes, the script won’t work.

The text of the script goes in the Scripting window, which is in the top-right menu of the Blender screen.
To do this, first press the “Scripting” button. Then press the “New” button, paste in all the text of the script, and press the Play (Run Script) button.
This will add an extra panel inside Blender. To see it, go back to the “Layout” menu in the top-left, press the N key to toggle the sidebar, go to the tab called “Josh Tool”, and at the top right you will see a panel that says “Import A2F Blendshapes” with a button that says “Create Animation”. Basically, that’s the script you just ran a few seconds ago.
With this window, you will be able to import the JSON animations from Audio2Face :)

Remember that to use it, you must first select the 3D Model with the 46 Blendshapes, and then press the “Create Animation” button.
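
If you’re curious what the script is actually registering when that panel appears, a rough, hypothetical skeleton of this kind of add-on looks like the following (the operator body here is only a placeholder; the real script opens a file browser and runs the JSON import at that point):

import bpy

class A2F_OT_create_animation(bpy.types.Operator):
    """Placeholder for the button's action; the real script loads the JSON and keys the shape keys here."""
    bl_idname = "a2f.create_animation"
    bl_label = "Create Animation"

    def execute(self, context):
        self.report({'INFO'}, "Create Animation pressed")
        return {'FINISHED'}

class A2F_PT_import_panel(bpy.types.Panel):
    bl_label = "Import A2F Blendshapes"
    bl_space_type = 'VIEW_3D'
    bl_region_type = 'UI'
    bl_category = "Josh Tool"  # the tab name you see after pressing N in the Layout view

    def draw(self, context):
        self.layout.operator("a2f.create_animation")

bpy.utils.register_class(A2F_OT_create_animation)
bpy.utils.register_class(A2F_PT_import_panel)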

I’m just amazed by the detailed explanation you wrote. Right now I’m having huge problems getting the blendshape conversion to work in Audio2Face; when I solve this issue, I will try what you wrote. And also, to be clear:

As I understand it, the method you described will let me attach, for example, hundreds of animations to my character mesh? Do I get that right? And as I understand it, the official Audio2Face method doesn’t allow you to attach several animations to one mesh?

Why am I asking? Because all I need is to understand the workflow for my game development process in Unity.

Thank you so much.

I could play the facial animation in Blender, but I can’t understand how to apply that facial animation to my character in Blender. I have a full-body character with its rig in Blender, and I want to apply hundreds of animations to it, because I will then need to use them in my interactive-movie-style game in Unity. Maybe you know some workflow for this? Thank you!

I’m so sorry, but this is extra hard for me to understand. It would be just amazing if you could make a video tutorial of the workflow from zero: first preparing the character for import into Audio2Face, then applying the Audio2Face animation to your character in Blender, and how to switch between the animations. Finally, I want to use those animations in my game in Unity.

Ladies and gentlemen, this man is a HERO ☝️🙌👏

It is just impossible!!!

I’m dumb as a stone, but I’ve managed to figure this out!!! Your text manual is just amazing!!! I switched on my empty head and tried to follow the workflow you wrote; you described everything in EXTRA detail.

I could import the animation onto my character in Blender and it’s ALIVE 🤣🤣🤣

The issue that remains for me is how to manage this in Unity. How can I apply facial animation with those JSON files? Do I need to work with the Unity Timeline?

Thank you so much, sensei 🥋 I dream of being like you one day 🙏

I’ve rendered the result, but I can’t understand why the lips are shaking.

Hey,
thanks for all the useful information. The link to the script doesn’t work anymore. It would be great if someone could post a copy again. Thank you very much. :-)

Hey everyone!

I’m going over this workflow tomorrow in a livestream:

I hope it’ll answer all the questions posed here. It’s also a preview of the upcoming release of the A2F Add-on for Blender, with new features for getting animated clips back into Blender.

For folks on the current Omniverse release of Blender, you can get shapes over now! The workflow is demonstrated here:

About the lip jitter: my expectation is that the source audio file has some noise in it, and the “silent” moments are not actually silent. You could fix the original audio file and retry, or you could zero the keys at the beginning in the Action editor for Shape Keys.
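
If you’d rather do that from the Scripting window than by hand in the Action editor, a small sketch like this (with a made-up cutoff frame, and assuming the shape-key action is on the selected face mesh) zeroes the early keys:

import bpy

cutoff_frame = 10  # hypothetical: keyframes at or before this frame get zeroed
obj = bpy.context.active_object
action = obj.data.shape_keys.animation_data.action  # the Shape Keys action you see in the Action editor

for fcurve in action.fcurves:
    for kp in fcurve.keyframe_points:
        if kp.co[0] <= cutoff_frame:
            kp.co[1] = 0.0  # flatten the shape key weight to zero
    fcurve.update()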


Thanks Charles, I watched about 40 minutes of the stream and tried to skip the timeline forward to see the results, but the end result I got had dead eyes and no moving jaw. Those are fundamental issues that are very hard to work around; sadly, we still can’t see any finished workflow :(

Are you planning to make a workflow video on how to animate the eyes, tongue, eyebrows, and lower jaw? I could get my character’s face mesh to animate, but I still can’t find any useful workflow for animating the head parts I mentioned.

For example, here is a video with a tested Audio2Face animation

The eyes of this character are moving; I just want to import the eye animation from my character in Audio2Face into Blender :(

Hi, hope you are doing well. Maybe you’ve saved the script somewhere, because the link is broken now; @sim.kottke is also asking for it.

I’ve tried to type the script out by hand, but the video resolution won’t let me see some of the letters. I’ve contacted the video uploader asking for the script; if he sends it to me, I will post it here. Waiting for his reply.

Hey zanarevshatyan, thank you very much. I also typed in the script I saw in the video, but unfortunately it is not the whole script. I can see the “Josh” button in the N-panel, but there is no button which says “import json”.
Did you type in the script we saw in Gina’s mum tutorial, or do you have another version? Maybe we will figure it out together :-)

Hey charley, thanks for the videos. Pretty cool workflow :-)
I followed the tutorial and everything worked out perfectly fine until the point where I have to press the “import animation” button in Blender, because I still have the “old” A2F add-on for Blender. Can you tell us when the new version will be released? Or is there already a way to make it work? Thank you very much.