Hello. I’ve managed to generate blendshapes on my character and import animation data from Audio2Face into Blender, but I’m missing one last piece of knowledge to complete the A2F → Blender pipeline.
The animation frames I’ve imported into Blender don’t appear to match the audio clip that was used to generate the animation. Before exporting the blendshape weights, I set the frame rate to 24 fps in A2F. Then I imported the animation and made sure my Blender scene was set to 24 fps as well. In a perfect world I would be done, but for some reason the animation plays back much slower than the audio.
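For reference, here’s roughly how I’m bringing the weights in (Python, run from Blender’s scripting tab). It’s only a sketch of my setup: the JSON field names (facsNames, weightMat, exportFps), the file path, and the object name are what I’m assuming from my export, so they may not match yours exactly.

```python
# Rough sketch of how I'm keying the A2F blendshape weights onto my mesh.
# I'm assuming the exported JSON carries "facsNames" / "weightMat" / "exportFps"
# fields -- adjust the key names if your export looks different.
import json
import bpy

json_path = "/path/to/a2f_bs_weights.json"   # exported weights (placeholder path)
mesh_obj = bpy.data.objects["Face"]          # mesh that already has the shape keys

with open(json_path) as f:
    data = json.load(f)

clip_names = data["facsNames"]       # one entry per blendshape / shape key
weights = data["weightMat"]          # per-frame list of per-pose weights
export_fps = data.get("exportFps", 24)

scene = bpy.context.scene
scene.render.fps = 24                # keep Blender at the same rate as the export

# If the export fps and the scene fps differ, rescale the keyframe times;
# otherwise 60 fps data played back at 24 fps looks 2.5x too slow.
frame_scale = scene.render.fps / export_fps

key_blocks = mesh_obj.data.shape_keys.key_blocks
for frame_idx, frame_weights in enumerate(weights):
    frame = 1 + frame_idx * frame_scale
    for name, value in zip(clip_names, frame_weights):
        kb = key_blocks.get(name)
        if kb is None:
            continue                 # shape key name mismatch, skip it
        kb.value = value
        kb.keyframe_insert(data_path="value", frame=frame)
```

My thinking was that scaling the keyframe times by the export fps versus the scene fps should keep everything in sync, but clearly something is still off.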
Does anyone have any advice on this topic? Thank you!