Blendshape Conversion With Emotion Not Working

I am exporting my blendshape weights as JSON from Audio2Face for use in Maya. If I export a face with no emotion values, the blendshape values look fine in Maya. If I add emotions like joy and amazement, my usdSkel looks right in Audio2Face, but the exported JSON weights only seem to affect one side of the face in Maya. Comparing the resulting keyframes in Maya shows many keys for one blendshape side (say lowerLipDepressorL) but very few for the other side (lowerLipDepressorR). I am using Audio2Face 2023.1.1.
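
For reference, this is roughly how I'm comparing key counts per side in Maya (blendShape1 and the target names are placeholders for my actual node and pose names):

```python
# Minimal sketch (Maya Python) of the key-count comparison described above.
# "blendShape1" and the target names are placeholders -- substitute your own.
from maya import cmds

BLENDSHAPE_NODE = "blendShape1"

def key_count(target):
    """Number of keyframes on one blendshape target weight."""
    return cmds.keyframe(
        "{}.{}".format(BLENDSHAPE_NODE, target),
        query=True, keyframeCount=True) or 0

for left, right in [("lowerLipDepressorL", "lowerLipDepressorR")]:
    print(left, key_count(left), "vs", right, key_count(right))
```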

Does the animation look correct in Maya, or is the animation also broken?

I assume you mean the keyframe animation on the blendshapes in Maya? They behave as I described: one side of the face moves as expected and the other moves much less than expected. If I import the keyframes for the non-emotion version, the face motion looks better, but I still think something is off on the right side vs. the left. It’s just a lot more noticeable when emotions are added.

I tested this and the animation looked correct. The values for lowerLipDepressorR and L were different at some points in the animation, but it was not noticeable without looking at the animation values. This difference is mainly because the data is trained on a real human face, which is not 100% symmetrical. But if the animation looks off for you, there’s definitely an issue. If that’s the case, could you screen-capture a video showing the issue and maybe share your file so we can investigate?

I have captured videos and saved the Audio2Face file. How would I send them to you? The Audio2Face file is 22 MB.

@stu9354 you can probably just DM Ehsan directly 🙂

Thanks, I tried that, but it does not let me attach a USD file, so he will need to give me another way to send files to him.

Ah, zipping the file up and uploading it to Google Drive or another cloud service, then sharing the link in a DM, would be my next step, perhaps.

Thanks for sending the files @stu9354

I redid the setup from scratch and things seem to work with Joy set to the highest value. I checked your video and it’s very strange that your Maya version is asymmetrical.

Here’s the order of operations:

  • First, generate blendShapes right after doing the character transfer (but before applying the A2F pipeline).
  • Then, you can apply the A2F pipeline to check how your deforming face moves. Please note that your deforming face does not have any blendShapes; A2F moves the actual points of your mesh.
  • Then, you can bring back the blendShape file you exported in the first step and create a new BlendShape Solver from the A2F Data Conversion tab. After this step, you should see both heads (your original head and the newly imported head, which has blendShapes) move correctly.
  • And finally, you can export the blendShape weight animation from the A2F Data Conversion tab as JSON and import it in Maya (see the sketch after this list).
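
For the last step, a minimal Maya-side import could look like the sketch below. It assumes the exported JSON contains "facsNames" (the pose names) and "weightMat" (one weight list per frame), and that your blendShape node is called blendShape1; adjust those names and the file path to match your own export and scene.

```python
# Minimal sketch: key exported A2F blendshape weights onto a Maya blendShape
# node. The JSON key names ("facsNames", "weightMat"), the file path, and
# "blendShape1" are assumptions -- check your own export and scene.
import json
from maya import cmds

JSON_PATH = "C:/temp/a2f_bsweight_export.json"  # placeholder path
BLENDSHAPE_NODE = "blendShape1"                 # placeholder node name

with open(JSON_PATH, "r") as f:
    data = json.load(f)

names = data["facsNames"]    # one entry per blendshape target
weights = data["weightMat"]  # weights[frame][target]

for frame, frame_weights in enumerate(weights):
    for name, weight in zip(names, frame_weights):
        cmds.setKeyframe(
            "{}.{}".format(BLENDSHAPE_NODE, name),
            time=frame, value=weight)
```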

That seems to work. The order in which you do things must make a big difference. Before, when I did it, both the original and the imported usdSkel looked right in Audio2Face, but the exported weights were wrong.