Audio2Face Lip Sync issues

Hi Team,

I am facing an issue where the lip sync of a custom character blend shape solve is not in sync with the audio. Even after exporting the .json curves to Maya, the sync doesn't match. The FPS is set manually to 24. This issue is critical for us, and I hope the NVIDIA team will be able to help us solve it.

Steps Followed:
Step 1: Character transfer (successful)
Step 2: Post Wrap (successful)
Step 3: Creating custom blend shape solve (successful)
Step 4: Blink test

After all these steps I am still facing sync issues within Audio2Face. There is also another issue where the blend shape weights are not working, even though they are connected to the nodes correctly.

Thanks
Jegan Devaraj

@jegan534 I am just another user passing by, but you could take a look at a few earlier threads that had similar reports regarding syncing:

Hi @Simplychenable,

Thanks for sharing this, I will take a look.

@jegan534 I’ve just tested this and it seems to work as expected (using JSON).
This is how mine shows in Maya: the length of the keys matches the audio length.

Where do you set the FPS when you export?
It should be here:

If the issue persists, can you share your audio file or recording of your steps?
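As a quick sanity check on the FPS setting, a small script like the one below can compare the duration implied by the exported JSON curves against the known audio length. This is only a sketch: the field names `exportFps`, `numFrames`, and `weightMat` are assumptions about the export format, so adjust them to match what your actual .json file contains.

```python
import json

def curve_duration_seconds(curve_data):
    """Duration implied by the exported curves, in seconds.

    Assumes the JSON carries the export frame rate in "exportFps" and the
    keyframe count in "numFrames" (hypothetical field names -- check your file).
    """
    fps = curve_data["exportFps"]
    num_frames = curve_data["numFrames"]
    return num_frames / fps

# Inline example standing in for json.load(open("a2f_bs_weights.json")):
example = {
    "exportFps": 24,   # must match the FPS you set manually at export time
    "numFrames": 120,  # number of keyframes in the exported curves
    "weightMat": [],   # per-frame blendshape weights (omitted here)
}

print(curve_duration_seconds(example))  # 120 frames at 24 fps -> 5.0
```

If this duration differs from the audio file's duration, the mismatch is most likely an FPS setting on either the export side or the Maya import side.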

Hi @esusantolim, thanks for taking the time to reply, but unfortunately I won't be able to share the audio file with you, since it is protected by an NDA.

The steps I took were basically to create a custom blend shape solve, and I have tried creating it multiple times. My weights tuner doesn't seem to work, even though I checked the lazy graph and it is connected there as well.

Could you please help me with the question below:

Is there a way to see animation curves inside Audio2Face or Machinima?
This would greatly help us in analysing our issue.

Regards
Jegan Devaraj

There are no animation curves in Audio2Face itself, but A2F can generate animation curves by exporting a cache as USDA. You will then be able to inspect the keys in a text editor, or in a USD viewer such as USD Composer. I haven't tried viewing cached keyframes in USD Composer, though.
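Since USDA is a plain-text format, the keyframes can even be counted without the USD Python bindings. Below is a rough sketch that scans the text for `timeSamples` entries of the form `<frame>: <value>,`; the sample snippet and attribute name are illustrative, and the exact layout of an A2F cache may differ.

```python
import re

# Illustrative stand-in for the contents of an exported .usda cache file.
SAMPLE_USDA = """\
def SkelAnimation "anim"
{
    float[] blendShapeWeights.timeSamples = {
        0: [0.0, 0.1],
        1: [0.2, 0.3],
        2: [0.4, 0.5],
    }
}
"""

def count_time_samples(usda_text):
    # Match lines like "0: [...]," inside a timeSamples block: optional
    # leading whitespace, a frame number, then a colon.
    return len(re.findall(r"^\s*\d+(?:\.\d+)?\s*:", usda_text, flags=re.M))

print(count_time_samples(SAMPLE_USDA))  # 3 keyframes in the sample above
```

Comparing this keyframe count against the expected frame count (audio length times FPS) is another way to confirm whether the export itself is the source of the drift.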

Regarding the tuner: it was originally created for streaming purposes only.
By default the tuner output is not connected to drive the character you see in the scene.
If you look at the graph, notice that:

  1. The output of the Tuner only goes to the StreamLivelink node, which is not used unless you are planning to stream the result.
  2. The WritePrimAttribute node is what writes the solved values to the SkelAnimation blendshape weights. You can connect the output of the Float Array Tuner node to the WritePrimAttribute node manually, so that the tuner drives your character.

To get the tuner weights exported in either JSON or USD, you will need to use the Advanced Settings to pick the tuner node to export.

Also, in case you haven't seen it, check out this video on blendshape export.
The instructions I mentioned above are covered in the video as well.
Audio2Face 2023.2: Blendshape Pipeline Improvement on Animation - YouTube