Hi, I am trying to learn Audio2Face. I can run A2F Data Conversion, but the generated blendshape values are almost all zero.
Input: one human-like face mesh with custom blendshapes.
Expected output: custom blendshape weights generated from the audio by A2F Data Conversion.
Steps:
1) Character Transfer → Add SKIN TEMPLATES.
2) Import the human-like face .usd into the Stage. Then Character Transfer → CHARACTER SETUP, choose my imported mesh, and click Setup Character.
3) Character Transfer → SKIN MESHES: set the correspondence, Begin Mesh Fitting, Begin Post Wrap.
4) Audio2Face Tool: add an A2F PIPELINE and attach it to a2fTemplates/mark. When I play the audio, my human-like face animates very similarly to Mark.
Then I want to use A2F Data Conversion to generate custom blendshape weights.
5) A2F Data Conversion → set Input Anim Mesh to /World/transfer_characeter/Mesh_result, set Blendshape Mesh to my original mesh, then click SET UP BLENDSHAPE SOLVE.
6) Finally, EXPORT AS JSON. I get a JSON file, but the solved blendshape values are almost all zero (I check them with the short script below). This can't be right, because my human-like face opens its mouth fine in Audio2Face play mode.
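For reference, this is roughly how I check the exported weights. It is a minimal sketch: the file name and the "facsNames"/"weightMat" keys are my assumptions about the JSON layout, so adjust them to whatever the exported file actually contains.

```python
import json

import numpy as np

# Load the JSON exported by A2F Data Conversion
# (file name is a placeholder).
with open("blendshape_solve.json") as f:
    data = json.load(f)

# Assumed layout: "facsNames" lists the blendshape names and
# "weightMat" is a (numFrames x numShapes) matrix of solved weights.
names = data["facsNames"]
weights = np.asarray(data["weightMat"], dtype=float)

print(f"{weights.shape[0]} frames x {weights.shape[1]} blendshapes")

# Peak absolute weight per blendshape; if the solve worked, at least
# the jaw/mouth shapes should peak well above zero.
peaks = np.abs(weights).max(axis=0)
for name, peak in sorted(zip(names, peaks), key=lambda p: -p[1]):
    print(f"{name:30s} max |weight| = {peak:.4f}")
```

In my case, almost every peak prints as (near) zero.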
Do you have any suggestions for debugging this?
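One thing I can check on my side is whether the custom targets are actually bound and non-empty on the original mesh. A sketch using UsdSkel (the stage file name and prim path are placeholders from my scene; yours will differ):

```python
from pxr import Usd, UsdSkel

# Open the stage that contains the original blendshape mesh
# (file name is a placeholder).
stage = Usd.Stage.Open("my_character.usd")

# Prim path of the mesh carrying the custom blendshapes (placeholder).
mesh_prim = stage.GetPrimAtPath("/World/character/Mesh")

binding = UsdSkel.BindingAPI(mesh_prim)
print("skel:blendShapes:", binding.GetBlendShapesAttr().Get())

# A blendshape whose point offsets are all zero can never receive a
# meaningful solved weight, so check each target's offsets as well.
for path in binding.GetBlendShapeTargetsRel().GetTargets():
    shape = UsdSkel.BlendShape(stage.GetPrimAtPath(path))
    offsets = shape.GetOffsetsAttr().Get() or []
    max_len = max((v.GetLength() for v in offsets), default=0.0)
    print(f"{path}: {len(offsets)} offsets, max offset length {max_len:.4f}")
```

If the names print but every offset length is zero, the solver would have nothing to fit, which would explain the zero weights.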