Transferring eye, tongue and jaw animations to a blendshape model

Hi there. Is it possible to transfer the eye, tongue and jaw animations to a blendshape model? So far I have only had success transferring the face; when I include the others I get an error. It seems that when I use blendshape generation to create a mesh, it only exports the face…
Thanks.

We deal with 3 different types of animation when using Audio2Face:

  • Face mesh: vertices move around, so we can generate a blendShape using this.

  • Eyes and Teeth: vertices do not move; only their transform values (translate, rotate) change, so we don’t really need blendShapes for them.

  • Tongue: the most complex case; both the transform and the vertices are animated. But we can generate blendShapes for the tongue using BlendShapeSolve, just like the face. You will need to create a few blendShapes for the tongue that cover almost all possible motions, e.g. TongueUp, TongueDown, TongueCurlUp, TongueCurlDown, TongueCurlLeft, TongueCurlRight, TongueStretchOut, TongueShorten, and possibly more if you need more detail. Then you can use the Data Conversion tab to generate blendShape animation weights (see the sketch after this list for one way to build those targets in Maya).
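If it helps, here is a minimal Maya Python sketch of building such a tongue blendShape node from sculpted target meshes. The object names (tongue_neutral and the per-target duplicates) are hypothetical placeholders, not names A2F requires.

```python
# Minimal Maya Python sketch: build a tongue blendShape node from sculpted targets.
# Assumes you already duplicated the neutral tongue once per shape, sculpted each
# duplicate, and named the meshes after the target. All names here are placeholders.
import maya.cmds as cmds

tongue_targets = [
    "TongueUp", "TongueDown",
    "TongueCurlUp", "TongueCurlDown",
    "TongueCurlLeft", "TongueCurlRight",
    "TongueStretchOut", "TongueShorten",
]

# Create one blendShape deformer on the neutral tongue with all targets attached.
blendshape_node = cmds.blendShape(
    tongue_targets + ["tongue_neutral"], name="tongueBlendShape"
)[0]

print("Created", blendshape_node, "with targets:",
      cmds.blendShape(blendshape_node, query=True, target=True))
```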

Hi, thanks for the reply. The reason I ask is that I need the teeth to be included in the blendshapes in order to use them in Unity. If I can’t include the teeth/jaw in the blendshapes, how would I go about exporting the transform data so that it can be used? Many thanks.

Jaw and teeth should be relatively easy to set up, as there are only 4 jaw movements in ARKit: JawForward, JawLeft, JawRight, JawOpen. This means you can create a simple blendShape for your lower teeth in any DCC, e.g. Maya, and then connect the face blendShape weights to this new jaw blendShape’s weights.
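As a rough illustration, connecting those weights in Maya could look like the Python sketch below. The node names face_blendShape and lowerTeeth_blendShape are hypothetical; substitute whatever your scene actually uses.

```python
# Rough Maya Python sketch: drive the lower-teeth blendShape with the face
# blendShape's jaw weights, so the teeth follow JawForward/JawLeft/JawRight/JawOpen.
# Node names are placeholders for the ones in your scene.
import maya.cmds as cmds

jaw_targets = ["JawForward", "JawLeft", "JawRight", "JawOpen"]

for target in jaw_targets:
    src = "face_blendShape.%s" % target       # weight coming from the face solve
    dst = "lowerTeeth_blendShape.%s" % target  # matching target on the teeth node
    if not cmds.isConnected(src, dst):
        cmds.connectAttr(src, dst, force=True)
```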

For the tongue, it’s a little different. You can solve blendShapes for it just like you can solve for the face.

In conclusion, you will need 3 different blendShapes: face, tongue and lowerTeeth. The first two can be created using Audio2Face’s A2F Data Conversion tab, but you will need to create the lowerTeeth blendShape node manually.

I want to create tongue blendshapes in Maya and use them in the BlendShapeSolve. How would I name the tongue blendshapes so the BlendShapeSolve could use them for the tongue? I know that the face solve expects a certain naming convention to work in the BlendShapeSolve. I assume I would export the tongue with blendshapes as USD from Maya and drag it into the stage in Audio2Face.

The names of the blendShape targets do not matter, even for the face. But the generated animation must be applied to the same blendShape node with the same targets.
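For intuition only, here is a small pxr USD Python sketch of that idea: the solved weights are written to a UsdSkel Animation whose blendShapes list matches the targets on the node being driven, in both name and order. The file path, prim path, target names and weight values are all hypothetical placeholders.

```python
# Sketch (pxr USD Python): exported weights only make sense when applied to an
# animation whose blendShapes list matches the solved targets in name and order.
# Paths, names and values below are placeholders.
from pxr import Usd, UsdSkel

stage = Usd.Stage.CreateNew("tongue_anim.usda")  # placeholder output file
anim = UsdSkel.Animation.Define(stage, "/Character/FaceAnim")

# Same target list (and order) as on the blendShape node being driven.
targets = ["TongueUp", "TongueDown", "TongueCurlUp", "TongueCurlDown"]
anim.CreateBlendShapesAttr().Set(targets)

# Per-frame weights, e.g. from the A2F Data Conversion export (placeholder values).
weights_per_frame = {1.0: [0.2, 0.0, 0.1, 0.0], 2.0: [0.4, 0.0, 0.0, 0.1]}
weights_attr = anim.CreateBlendShapeWeightsAttr()
for frame, weights in weights_per_frame.items():
    weights_attr.Set(weights, frame)

stage.Save()
```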

I am now using 2023.2.0. I have the tongue blendshapes in my USD skel being driven by the solver. I only see the tongue tip animated in the result mesh but the input driver mesh shows other parts of the tongue moving. You say the blendshape target names don’t matter, but how does the solver map the input mesh movement to the appropriate output blendshape mesh names?

I’m not quite sure why only the tip of the tongue moves in your case. But the blendShape names of the tongue (or the face) don’t matter. You can confirm this by solving blendShape weights for both the 46 NV shapes and the 52 ARKit shapes. The solver doesn’t know anything about the shape names; it triggers all the shapes and compares the result with the final deforming mesh, and if they’re close enough, it figures the triggered shape must be the correct one.
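For intuition, here is a toy NumPy sketch of that idea as a generic least-squares fit of per-shape weights against the deformed mesh. This is only an illustration of the concept, not A2F’s actual solver.

```python
# Illustrative sketch (NumPy): the solver ignores target names and simply finds
# the per-shape weights whose combination best reproduces the deforming mesh.
# Generic least-squares formulation, not A2F's actual implementation.
import numpy as np

def solve_frame(neutral, targets, deformed):
    """neutral: (V, 3) rest vertices; targets: (S, V, 3) blendshape targets;
    deformed: (V, 3) the mesh A2F deformed on this frame."""
    deltas = (targets - neutral).reshape(len(targets), -1).T  # (3V, S) per-shape offsets
    residual = (deformed - neutral).reshape(-1)               # (3V,) offset to explain
    weights, *_ = np.linalg.lstsq(deltas, residual, rcond=None)
    return np.clip(weights, 0.0, 1.0)                         # keep weights in [0, 1]

# Tiny synthetic example: one shape that moves every vertex up by 1 unit.
neutral = np.zeros((4, 3))
targets = np.array([neutral + [0.0, 1.0, 0.0]])
deformed = neutral + [0.0, 0.5, 0.0]
print(solve_frame(neutral, targets, deformed))  # ~[0.5]
```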

For the face blendshapes I use the ones generated on the Character Transfer tab, so those are deformed in the way that Audio2Face expects when they are solved. But there is no tongue equivalent, so I created my own deformations. I am guessing that my deformations don’t match up with Audio2Face’s deformations except for the tip up and down. How can I make my tongue deformation meshes match what Audio2Face will be comparing against for the solve?

Take a look at your deforming tongue in Audio2Face and compare it to your tongue’s blendShapes. If you believe all the shapes and forms of the deforming tongue are achievable by your tongue’s blendShapes, then it should be good. Here’s a list of the blendShapes in my test, and it seems to cover all frames pretty well. Link to video

[Screenshot: list of tongue blendShape targets used in the test]

Thank you for explaining this, but I am still left wondering how to use blendshapes I made in Blender in A2F. There are many tutorials, but a tutorial for this jaw and tongue workflow is missing.
There are people like me who have wondered the same thing; here is an image of the comments on YouTube. Thanks for your consideration. I know this is a tool you have wanted to make available, and I think it will be used more if another tutorial is provided.

Not sure if I understand the question here. So you’ve made all the 52 ARKit blendshape targets for your character in Blender already and want to animate them?

The face blendshapes work in Blender, and I manually made the lower jaw blendshapes in Blender to match ‘jawdrop’, ‘jawthrust’, etc. It ends up being more than 52 blendshapes, but I just don’t know how to use them back in A2F.

I was just confused about how to complete the jaw and tongue animations in A2F after making those extra blendshapes on my own in Blender. I’m sorry I didn’t clarify that.

You can use the Character Transfer tab to drive your blendshape mesh, using the mesh that is driven by A2F.

Here is a two-part tutorial:
Omniverse Audio2Face Drive Character Using ARKit Blendshape Solve - Part 1 - YouTube
Omniverse Audio2Face Drive Character Using ARKit Blendshape Solve - Part 2 (youtube.com)
