Audio2Face Lip Synch Not Working

Hello,

I’ve gone through all of the steps to bring the face of a DAZ 3D Genesis 8 character into Audio2Face. This character was previously set up and rigged in Blender for use there.

I got everything to work up to and including generating the blendshapes, exporting them, importing them into Blender as shape keys, and transferring them to the original mesh.
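For anyone following along: shape keys (like the blendshapes A2F exports) are just per-vertex offsets from a basis shape, blended linearly by the key's value. Here's a minimal pure-Python sketch of that arithmetic (illustrative only; the function name and sample vertices are made up, and this is not Blender's bpy API):

```python
# Minimal sketch of how a shape key deforms a mesh: each key stores a
# target shape, and the mesh blends from the basis toward it by the
# key's value (0.0..1.0). Vertices are (x, y, z) tuples.

def apply_shape_key(basis, target, value):
    """Blend each basis vertex toward the matching target vertex by `value`."""
    return [
        tuple(b + value * (t - b) for b, t in zip(bv, tv))
        for bv, tv in zip(basis, target)
    ]

# Two-vertex toy mesh; `target` stands in for one exported A2F blendshape.
basis  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
target = [(0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]

half_open = apply_shape_key(basis, target, 0.5)
print(half_open)  # [(0.0, 0.5, 0.0), (1.0, 0.5, 0.0)]
```

At value 0.0 the mesh stays at the basis shape, and at 1.0 it reaches the target, which is why a stream of per-frame key values from A2F produces the lip-sync animation.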

That all works fine.

However, when setting up the A2F Pipeline, I run into an issue. When I try to follow an NVIDIA Omniverse livestream from 2 months ago, the dialog box doesn’t look the same as the one in the livestream.

Under Select A2F Core Type I select Full-Face Core; all of the additional meshes (Upper Teeth, Lower Teeth, Tongue, Left Eye, and Right Eye) should then appear in the dialog box, but I don’t see them. Additionally, even when I click the “YES, ATTACH” button, the audio player appears, but neither the Mark mesh nor my fitted mesh moves with the audio.

I’ve attached a screen image of the A2F Pipeline dialog box.

I’ve followed tutorials several times, but I can’t figure out what’s going wrong.

I can provide files which might be helpful.

Thanks so much for any help you can provide. :)

Here’s a screen image of the Character Transfer tab as well, to show that I’m including the additional meshes along with the skin.

I figured out what was going on. The dialog box has apparently been updated since the tutorial videos and the livestream that I watched were made.

So, I’ve been able to set things up correctly to get the lip synch working with my imported mesh! :)
