Hello, I’m trying to produce lip-sync animation with Audio2Face (A2F) on a custom mesh with no blendshapes, and I find myself stuck.
At the end of all this, I need to import the lip sync either into Daz Studio (as FBX or JSON) or into Blender, but I just can’t make it work.
I have a working .usd file that I made (pic on top) by following the “BlendShape Generation in Omniverse Audio2Face” tutorial.
The only difference from the tutorial is that my imported skel mesh (the head on the right) has a different rotation and scale (scale 100), so I had to lower the sensitivity to 0.001.
Now that I have everything working, how do I move forward?
I have Face Mojo in Daz Studio, which lets me import JSON or FBX mocap. Since I saw the Export as JSON option in A2F, I tried going that way, but when I import the JSON into Daz nothing happens (I’m not even sure this is the right approach).
Is there a way to just export an FBX animation of the lip-synced head?
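In case it helps anyone diagnose the JSON route: as far as I can tell, the A2F blendshape export is a JSON file with a list of pose names and a per-frame weight matrix (I’m assuming keys named `facsNames` and `weightMat` here, which matched my export, but other A2F versions may differ). A small sketch to sanity-check what the file actually contains before blaming Face Mojo:

```python
import json

def load_a2f_weights(path):
    """Parse an Audio2Face blendshape-weight JSON export.

    Assumes the file contains 'facsNames' (a list of pose names) and
    'weightMat' (one row of per-pose weights per frame). Adjust the
    keys if your A2F version writes a different layout.
    """
    with open(path) as f:
        data = json.load(f)
    names = data["facsNames"]
    frames = data["weightMat"]  # shape: numFrames x numPoses
    # Return {pose_name: [weight at each frame]} for easy inspection.
    return {name: [frame[i] for frame in frames]
            for i, name in enumerate(names)}
```

If the pose names printed here don’t match the morph names Face Mojo expects, that could explain why nothing happens on import.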
As for Blender (3.4, USD alpha build), I’m also stuck, because I don’t know how to copy the lip-sync animation from the imported head onto the model I actually want to animate.
From A2F I can export either the head with blendshapes but no lip sync, or the lip sync but no blendshapes. So I ran the whole A2F process again with the new skel head, so that at the end I would have lip sync + blendshapes. But once imported into Blender, the blendshapes are present yet changing their values doesn’t deform the mesh at all (I ran the same test with the skel head from before the A2F process, and its blendshapes worked just fine).
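For what it’s worth, the workaround I’ve been attempting in Blender is to convert the exported weights into keyframes on the target mesh’s shape keys. This is only a sketch under my assumptions (shape-key names on the target matching the A2F pose names, one weight row per frame starting at frame 1); the object name is hypothetical:

```python
# Pure helper: turn {pose_name: [weights]} into per-shape-key
# (frame, value) keyframe lists, one entry per frame.
def to_keyframes(weights, start_frame=1):
    return {name: list(enumerate(vals, start=start_frame))
            for name, vals in weights.items()}

# Inside Blender, the result could then be applied to the target
# mesh's shape keys (assuming its key names match the A2F poses):
#
# import bpy
# obj = bpy.data.objects["MyHead"]  # hypothetical object name
# for name, frames in to_keyframes(weights).items():
#     kb = obj.data.shape_keys.key_blocks.get(name)
#     if kb is None:
#         continue  # no matching shape key on the target mesh
#     for frame, value in frames:
#         kb.value = value
#         kb.keyframe_insert("value", frame=frame)
```

I don’t know if this is the intended workflow, so if there is a proper way to retarget the animation I’d rather do that.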
Here are the files:
TestForumUpload.usd (83.9 MB)