Thanks for providing the scene. I just tested it and things seem to work. But because the scale of your character is very small compared to the default meshes in Audio2Face, it's hard to see the animation on the generated blendshapes.
Here’s another thread that shows the same issue and has the answer.
It seems you've done the setup correctly. You just need to select the "BlendShape Solve" node in the graph and set these values:
Weight regularization: 0.001
Temporal Smoothing: 0.001
Weight Sparsity: 0.001
As you can see in the video I've uploaded, I set the values you sent me, but the problem was in the Blender export step: I needed to export my model using the Omniverse add-on's "Export in USD" button, whereas earlier I was exporting from the Audio2Face export tab in the add-on. I've got the animation working in Blender, but now I can't figure out how to play the animation on my character. I have the full character, with its body and rig, in Blender, and I just want to apply the facial animation to it. How can that be done? Is there a workflow you can suggest?
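(Edit, for anyone landing here later: one possible approach I found is to share the baked shape-key action between the imported Audio2Face head and the character's own face mesh via Blender's Python console. The object names `"A2F_Head"` and `"Character_Face"` below are placeholders for your own meshes; this sketch assumes both meshes have identically named shape keys, which is the case when the character mesh was the source of the blendshapes.)

```python
import bpy

# Assumption: "A2F_Head" is the animated mesh imported from the USD export,
# "Character_Face" is the face mesh of the full rigged character.
src = bpy.data.objects["A2F_Head"]
dst = bpy.data.objects["Character_Face"]

# The baked blendshape weights live in an action on the shape-key datablock.
src_action = src.data.shape_keys.animation_data.action

# Reuse the same action on the character's shape keys, so both meshes
# play the identical facial animation on the timeline.
dst_keys = dst.data.shape_keys
dst_keys.animation_data_create()
dst_keys.animation_data.action = src_action
```

After this, scrubbing the timeline should drive the character's face directly, and the imported head can be deleted or hidden.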
I also have a question about playing bulk facial animations on the characters in my game in Unity. My game is an interactive movie, and the characters will need hundreds of facial animations. What is the best solution for this? Maybe going through Blender? In the end I need the animations in Unity.
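(Edit: for the bulk case I ended up scripting Blender in background mode to convert each exported USD clip to an FBX that Unity reads as an animation clip with blendshape curves. This is just a sketch of that idea; the folder paths are placeholders, and it assumes a Blender build with the USD importer, run as `blender --background --python batch_convert.py`.)

```python
# batch_convert.py -- convert a folder of Audio2Face USD clips to FBX for Unity.
import bpy
import glob
import os

CLIP_DIR = "/path/to/a2f_clips"                 # assumption: exported .usd clips
OUT_DIR = "/path/to/UnityProject/Assets/FacialAnims"

for usd_path in sorted(glob.glob(os.path.join(CLIP_DIR, "*.usd"))):
    # Start each clip from an empty scene so clips don't pile up.
    bpy.ops.wm.read_homefile(use_empty=True)
    bpy.ops.wm.usd_import(filepath=usd_path)

    name = os.path.splitext(os.path.basename(usd_path))[0]
    bpy.ops.export_scene.fbx(
        filepath=os.path.join(OUT_DIR, name + ".fbx"),
        bake_anim=True,  # bake the shape-key animation into the FBX
    )
```

In Unity the blendshape curves then show up on the imported clips, so they can be triggered from an Animator or played directly, which scales to hundreds of clips without hand work.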
Thank you so much for your support, you are the best!