I’ve searched the forums here and found several very similar issues, but none of the solutions resolved mine, or I didn’t know how to implement them.
STEPS:
- Create a USDC file in the Omniverse-embedded Blender.
- Import the USDC file into Audio2Face.
- Set up meshes
- Mesh fitting
- Post wrap
- Preview in stage
- All blend shapes look like my default mesh shape
- Export my file
- Go back to Blender and transfer the shape data; the transfer succeeds, but all blend shapes are exactly the same (a way to check where the deltas are lost is sketched right after this list).
- Go back to Audio2Face and check the console: a bunch of warnings I don’t really understand.
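To narrow down whether the blendshape deltas are lost on export or only on the Blender side, here is a minimal diagnostic sketch using the pxr Python bindings. The file path is a placeholder for my actual export, and I’m assuming the exported file contains standard UsdSkel BlendShape prims; I have not confirmed this is the intended way to validate A2F output.

```python
# Sketch: check whether the exported blendshapes carry any real offsets.
# "a2f_export.usd" is a placeholder for the actual exported file path.
from pxr import Usd, UsdSkel

stage = Usd.Stage.Open("a2f_export.usd")

for prim in stage.Traverse():
    if prim.IsA(UsdSkel.BlendShape):
        offsets = UsdSkel.BlendShape(prim).GetOffsetsAttr().Get()
        # All-zero offsets mean the shape is identical to the basis mesh,
        # which would match "all blend shapes look like my default mesh".
        all_zero = offsets is None or all(
            abs(c) < 1e-8 for v in offsets for c in v
        )
        print(prim.GetPath(), "all-zero offsets:", all_zero)
```

If every blendshape prints all-zero offsets, the problem is already in the exported file, not in the Blender shape-data transfer.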
SYSTEM INFO:
- Graphics Card: NVIDIA GeForce RTX 4060
- Driver version: 31.0.15.3742 (up-to-date)
- GeForce Game Ready Driver: 537.42 (up-to-date)
DON’T KNOW HOW TO DO (POSSIBLE FIX):
Generate the BlendShapes before applying the A2F pipeline. I don’t even know how to interact with the A2F pipeline, but I followed this tutorial exactly: https://www.youtube.com/watch?v=Etztivmjny4
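If the fix really is to create the blendshapes on the mesh before it goes through the A2F pipeline, I imagine the Blender side would look something like the sketch below. The object name and the shape key names are hypothetical placeholders; I don’t know what names, if any, A2F expects.

```python
# Sketch, assuming A2F wants pre-existing shape keys on the exported mesh.
# "Face" and the key names below are placeholders, not anything from A2F.
import bpy

obj = bpy.data.objects["Face"]  # placeholder: the face mesh object

# Ensure a Basis key exists; without it, no shape keys export at all.
if obj.data.shape_keys is None:
    obj.shape_key_add(name="Basis")

# Add placeholder shape keys (from_mix=False copies the basis shape).
for name in ("jawOpen", "mouthSmile"):  # hypothetical key names
    if name not in obj.data.shape_keys.key_blocks:
        obj.shape_key_add(name=name, from_mix=False)
```

If anyone can confirm whether the pipeline expects shape keys to pre-exist on the mesh, that would help a lot.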
ADDITIONAL INFO:
- Mesh has no eyes.
- Mesh has a tongue and lower teeth, but for simplicity I did not specify them in Lower Denture; I’m trying to remove any additional complexity until the workflow is operational.
- Unlike the tutorial, when I set up the character I get a gen_openMouth mesh that is not Mark. I don’t know how to get the same template mesh as in the video.
- Log file:
audio2face_blendGenLog.txt (28.3 KB)
Attached is an image showing my issue.