Real-time Audio2Face to Unity

I have to create a digital human assistant. The NLU and NLG parts are already done, and I was trying to find out how I can use the Audio2Face features to lip-sync the digital human model with the text or real-time audio stream generated by the NLG.
Is there a workflow to visualize the facial animations generated by Audio2Face in Unity in real time?

Unfortunately, we don’t currently have this.

There are two steps to making this possible:

  • A2F being able to stream blendShape weights. We’re hoping to have this in the coming releases.
  • Unity being able to read those values. A new Unreal Engine connector is going to be released soon which will make it possible to read Audio2Face streaming values. This will eventually be done for Unity too, but in the meantime, this OV streamer app can be used as a reference for people to figure out how to build their own Unity version (see the sketch after this list for one possible approach).
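
To illustrate what such a user-built bridge for the second step could look like, here is a minimal Python sketch. It is not based on any official NVIDIA API: the ports, the newline-delimited JSON frame format, and the idea of a Unity-side UDP listener are all hypothetical stand-ins for illustration; the actual streaming interface should be taken from the OV streamer app’s source.

```python
# Minimal sketch of a bridge that forwards blendshape weight frames to Unity.
# Assumptions (not part of any official NVIDIA API):
#   - weight frames arrive as newline-delimited JSON over a local TCP socket
#     on A2F_PORT, e.g. {"timestamp": 0.033, "weights": {"jawOpen": 0.42, ...}}
#   - a Unity script listens for UDP datagrams on UNITY_PORT and applies
#     the weights to a SkinnedMeshRenderer's blendshapes
import json
import socket

A2F_PORT = 12030                              # hypothetical source port
UNITY_HOST, UNITY_PORT = "127.0.0.1", 12031   # hypothetical Unity listener

def main() -> None:
    unity = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", A2F_PORT))
    server.listen(1)
    conn, _ = server.accept()

    buf = b""
    while True:
        chunk = conn.recv(4096)
        if not chunk:
            break
        buf += chunk
        # Frames are newline-delimited JSON; process every complete line.
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            frame = json.loads(line)
            # Forward the frame unchanged; Unity maps weight names to
            # blendshape indices on its side.
            unity.sendto(json.dumps(frame).encode("utf-8"),
                         (UNITY_HOST, UNITY_PORT))

if __name__ == "__main__":
    main()
```

On the Unity side, a small C# script could listen on the same UDP port, parse each frame, and apply the values per weight with SkinnedMeshRenderer.SetBlendShapeWeight.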

Thank you for your reply.

How long do you think it will take to release those features?

To be honest, it’s not easy to predict when all of these are going to happen. But the Unreal Engine connector should be updated in the coming days.

I’m happy to say that the first part of the problem is resolved in the latest Audio2Face release, 2023.1.0, and we can now stream blendShape weights.

The second part can be done by any user or by the Unity team.

Thank you for the great update.
Best regards

