Transferring eye, tongue and jaw animations to a blendshape model

Hi there. Is it possible to transfer the eye, tongue and jaw animations to a blendshape model? So far I have only had success transferring the face; when I include the others I get an error. It seems that when I use blendshape generation to create a mesh, it only exports the face…
Thanks.

We deal with 3 different types of animation when using Audio2Face:

  • Face mesh: vertices move around, so we can generate a blendShape using this.

  • Eyes and Teeth: vertices do not move; only their transform values (translate, rotate) change. So we don’t really need a blendShape for them.

  • Tongue: the most complex… both the transform and the vertices are animated. But we can generate a blendShape for the tongue using BlendShapeSolve, just like the face. You will need to create a few blendShapes for the tongue that cover almost all possible motions, e.g. TongueUp, TongueDown, TongueCurlUp, TongueCurlDown, TongueCurlLeft, TongueCurlRight, TongueStretchOut, TongueShorten, and possibly more if you need more detail (see the sketch after this list). Then you can use the Data Conversion tab to generate blendShape animation weights.
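
For reference, here is a minimal Maya Python sketch of wiring pre-sculpted tongue targets into a single blendShape node before exporting to USD. The mesh names (`tongue_base` and the target duplicates) are assumptions for illustration, not a required convention:

```python
from maya import cmds

# Target poses listed above; each is assumed to be a duplicate of the neutral
# tongue mesh that has been sculpted into the corresponding shape.
targets = [
    "TongueUp", "TongueDown", "TongueCurlUp", "TongueCurlDown",
    "TongueCurlLeft", "TongueCurlRight", "TongueStretchOut", "TongueShorten",
]

# blendShape takes the targets first and the base (neutral) mesh last.
bs = cmds.blendShape(targets + ["tongue_base"], name="tongueBlendShape")[0]
print("Created", bs, "with targets:", cmds.listAttr(bs + ".w", multi=True))
```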

Hi, thanks for the reply. The reason I ask is that I need the teeth to be included in the blendshapes in order to use them in Unity. If I can’t include the teeth/jaw in the blendshapes, how would I go about exporting the transform data so that it can be used? Many thanks.

Jaw and teeth should be relatively easy to set up, as there are only 4 jaw movements in ARKit: JawForward, JawLeft, JawRight, JawOpen. This means you can create a simple blendShape for your lower teeth in any DCC, e.g. Maya, then connect the face blendShape weights to the weights of this new jaw blendShape, as sketched below.
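
As a rough sketch in Maya Python (the node names here are assumptions), with the lower-teeth blendShape carrying targets of the same four names:

```python
from maya import cmds

# Drive the lower-teeth blendShape from the face blendShape's ARKit jaw weights.
jaw_targets = ["JawForward", "JawLeft", "JawRight", "JawOpen"]
for target in jaw_targets:
    cmds.connectAttr("faceBlendShape." + target,
                     "lowerTeethBlendShape." + target)
```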

For the tongue, it’s a little different. You can solve blendShapes for it just like you can for the face.

In conclusion, you will need 3 different blendShapes: face, tongue and lowerTeeth. The first 2 can be created using Audio2Face’s A2F Data Conversion tab, but you will need to create the lowerTeeth blendShape node manually.

I want to create tongue blendshapes in Maya and use them in the BlendShapeSolve. How would I name the tongue blendshapes so the BlendShapeSolve could use them for the tongue? I know that the face solve expects a certain naming convention to work in the BlendShapeSolve. I assume I would export the tongue with blendshapes as USD from Maya and drag it into the stage in Audio2Face.

The names of the blendShape targets do not matter, even for the face. But the generated animation must be applied to the same blendShape node with the same targets.
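
As an illustration, here is a sketch of pushing exported weights back onto that same blendShape node in Maya, matching targets by order. The JSON keys (`facsNames`, `weightMat`) follow one common A2F weight-export layout; treat the exact schema and file name as assumptions:

```python
import json
from maya import cmds

with open("tongue_weights.json") as f:
    data = json.load(f)

names = data["facsNames"]    # target aliases, in the same order as the solve
weights = data["weightMat"]  # weights[frame][target]

# Key each target weight on the node the solve was set up against.
for frame, row in enumerate(weights):
    for name, value in zip(names, row):
        cmds.setKeyframe("tongueBlendShape." + name, time=frame, value=value)
```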