Audio2Face blend shape streaming for any character in Unity3D

How do you make real-time Audio2Face blend shape streaming work for any character in Unity3D, the way it works for MetaHuman? Has NVIDIA released a plugin for Unity3D (like LiveLink for Unreal Engine)? How do you make a character compatible with such a plugin, and how do you set up the REST API pipeline end to end for Unity3D? Is there a tutorial or reference document available? The goal is real-time character lip sync and facial animation driven by audio.

I'm not one of the devs, but judging by some of their responses, the general idea is that there is no ready-made tool at the moment to make streaming work in Unity. However, if you are comfortable doing some custom coding, you could review the LiveLink tool made for UE and reverse-engineer it into something for Unity.
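As a rough illustration of what such a reverse-engineered bridge has to do, here is a minimal Python sketch of one side of it: taking a per-frame set of named blendshape weights (the JSON layout and shape names here are assumptions, loosely following the ARKit-style shapes Audio2Face can export, not a documented NVIDIA wire format) and packing them into a fixed binary layout that a Unity-side reader could decode.

```python
import json
import struct

# Hypothetical frame: one timecode plus named blendshape weights.
# The field names and JSON shape are illustrative assumptions, not
# the actual Audio2Face streaming format.
FRAME_JSON = '{"timecode": 0.033, "weights": {"jawOpen": 0.42, "mouthSmileLeft": 0.1}}'

def pack_frame(frame_json, shape_order):
    """Repack a JSON blendshape frame into a compact binary message:
    a little-endian float timecode followed by one float per shape,
    in the fixed order given by shape_order (missing shapes -> 0.0)."""
    frame = json.loads(frame_json)
    weights = frame["weights"]
    values = [float(weights.get(name, 0.0)) for name in shape_order]
    return struct.pack(f"<f{len(values)}f", frame["timecode"], *values)

def unpack_frame(data, shape_order):
    """Inverse of pack_frame: recover the timecode and named weights."""
    n = len(shape_order)
    timecode, *values = struct.unpack(f"<f{n}f", data)
    return timecode, dict(zip(shape_order, values))
```

On the Unity side you would read the same layout (e.g. with `BinaryReader` over a UDP or TCP socket) and apply each weight with `SkinnedMeshRenderer.SetBlendShapeWeight`, scaled to Unity's 0-100 range, on a character whose mesh exposes blendshapes with matching names. A fixed shape order keeps each frame small and avoids sending names every frame.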
