Hi, I’m doing everything as shown in the video below, using my own MakeHuman (not MetaHuman) character.
When I get to the blendshape conversion step, I get an error and can’t apply the animation to my blendshaped character head.
I’m attaching two txt log files. I tried this twice; before the second attempt I restarted Audio2Face. log1.txt (750.9 KB) log2.txt (617.7 KB)
I’m also attaching my project file in USD format.
Thanks for providing the scene. I just tested it and things seem to work. However, because your character’s scale is very small compared to the default meshes in Audio2Face, it’s hard to see the animation on the generated blendShape mesh.
Here’s another thread that shows the same issue and has the answer.
The log says insufficient VRAM. Maybe I can’t do this on my laptop? That’s a shame, because I managed to get through the TensorRT build process and am now stuck here ;(
It seems you’ve done the setup correctly. You just need to select the “BlendShape Solve” node in the graph and set these values:
Weight regularization: 0.001
Temporal Smoothing: 0.001
Weight Sparsity: 0.001
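For intuition, the three values above correspond to the three regularization terms in a typical blendshape solve. Here is a small NumPy sketch of what such a solve looks like conceptually; this is my own illustration, not the actual Audio2Face solver, and the function name and defaults are invented for the example:

```python
import numpy as np

def solve_frame(B, delta, w_prev, reg=1e-3, temporal=1e-3, sparsity=1e-3,
                steps=500, lr=0.1):
    """Illustrative per-frame blendshape solve (NOT the Audio2Face code).

    Minimizes  ||B @ w - delta||^2          (fit the target deltas)
             + reg      * ||w||^2           (weight regularization)
             + temporal * ||w - w_prev||^2  (temporal smoothing)
             + sparsity * ||w||_1           (weight sparsity)
    via proximal gradient descent (ISTA).
    """
    n = B.shape[1]
    w = np.zeros(n)
    for _ in range(steps):
        # gradient of the three smooth (squared) terms
        grad = (2 * B.T @ (B @ w - delta)
                + 2 * reg * w
                + 2 * temporal * (w - w_prev))
        w = w - lr * grad
        # soft-thresholding: the proximal step for the L1 sparsity term
        w = np.sign(w) * np.maximum(np.abs(w) - lr * sparsity, 0.0)
    return w
```

Raising the sparsity value drives more blendshape weights to exactly zero; raising temporal smoothing pulls each frame toward the previous one, trading responsiveness for stability.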
As you can see in the video I uploaded, I used the values you sent me, but the problem turned out to be in the Blender export step: I needed to export my model using the Omniverse add-on’s “Export in USD” button, whereas earlier I was exporting from the Audio2Face export tab of the Omniverse add-on. I got the animation working in Blender, but now I don’t understand how to play the animation on my character in Blender. I have the full character with its body and rig, and I just want to apply the facial animation to it. How can that be done? Is there a workflow you can suggest?
I also have a question about playing facial animations in bulk on a character in my game in Unity. My game is an interactive movie, and the characters will need hundreds of facial animations. What is the best solution for this? Maybe going through Blender? Either way, in the end I need the animations in Unity.
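One common approach for bulk clips is to bake the per-frame blendshape weights into a simple intermediate format that a Unity-side importer turns into AnimationClips driving `SkinnedMeshRenderer` blendshape weights. The JSON layout and function names below are my own invention, just to sketch what such a pipeline step could look like:

```python
import json

def export_clip(path, shape_names, frames, fps=30):
    """Write one facial-animation clip as JSON (hypothetical format).

    shape_names: blendshape names, e.g. ["jawOpen", "mouthSmile"]
    frames:      list of per-frame weight lists, aligned with shape_names
    """
    clip = {"fps": fps, "shapes": shape_names, "frames": frames}
    with open(path, "w") as f:
        json.dump(clip, f)

def load_clip(path):
    """Read a clip back; a Unity importer would do the equivalent in C#."""
    with open(path) as f:
        return json.load(f)
```

With hundreds of clips, a format like this lets you batch-export from Blender or Audio2Face with a script and regenerate all Unity clips in one editor-side import pass.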
Thank you so much for your support, you are the best!
I believe your issues are largely solved by the next version of the A2F Add-on. I’ve added animation support, so you can bring in clips from Audio2Face.
If you’d like to test the new version, please DM me. :) Otherwise, the goal is to have it drop for GTC 2023.
This is a bit off topic, but is there any better or newer documentation available than this? Specifically, something that describes the regularizers in more detail.
If not, maybe you can answer my questions directly:
Is sparsity_regul_weight just a regular L1 regularizer?
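For reference, if it is a plain L1 penalty (an assumption on my part; the docs don’t confirm the exact form), the solve objective with all three regularizers would look like this, where $B$ holds the blendshape deltas, $\Delta$ the target deltas, and $w$ the per-frame weights:

```latex
\min_{w}\; \lVert B w - \Delta \rVert_2^2
  + \lambda_{\mathrm{reg}}    \lVert w \rVert_2^2
  + \lambda_{\mathrm{temp}}   \lVert w - w_{\mathrm{prev}} \rVert_2^2
  + \lambda_{\mathrm{sparse}} \lVert w \rVert_1,
\qquad
\lVert w \rVert_1 = \sum_i \lvert w_i \rvert
```

An L1 term of this shape is what pushes individual weights to exactly zero, unlike the squared L2 terms, which only shrink them.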