Audio2Face into Unity

Checking to see if Audio2Face and Unity have a solid relationship. I've got some AI APIs in my Unity project and want to provide a Wizard of Oz-style avatar to make the experience even more wild.

I understand blend shapes and imports, but I was hoping to embed some "live" or dynamic functionality into the project.

My end goal is to livestream "new" audio data into my project and have the avatar respond with Audio2Face quality.

Am I jumping the gun here, and it's not possible to use Audio2Face in a "live" dynamic environment? Do I have to have all my audio clips pre-"baked" and imported? Where is the bar here? Ty ty, love the tools!

I believe it's possible. Thinking it through, you'd likely stream new audio into an external Kit App, have it generate the new blendshape weights, and have those weights streamed back into Unity. At present we don't have any such examples to my knowledge, but I can bring the request back to the team.
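
To make that concrete, here's a rough sketch of what the Unity-side receiving end could look like. Everything here is an assumption on my part, not an existing Audio2Face or Kit API: the port, the UDP transport, and a packet layout of one little-endian float per blendshape. It also assumes the sender and the mesh agree on blendshape ordering.

```csharp
using System.Net;
using System.Net.Sockets;
using UnityEngine;

// Hypothetical receiver: applies blendshape weights streamed from an
// external app (e.g. a Kit App running Audio2Face). Packet layout is
// assumed to be N little-endian floats, one per blendshape, in Unity's
// usual 0-100 weight range. This is a sketch, not a shipped API.
public class BlendshapeStreamReceiver : MonoBehaviour
{
    public SkinnedMeshRenderer face;   // mesh with matching blendshapes
    public int port = 12030;           // placeholder port

    UdpClient udp;
    IPEndPoint anyEndpoint = new IPEndPoint(IPAddress.Any, 0);

    void Start()
    {
        udp = new UdpClient(port);
    }

    void Update()
    {
        // Drain all pending packets, keeping only the most recent frame
        // so the face never lags behind the audio.
        byte[] latest = null;
        while (udp.Available > 0)
            latest = udp.Receive(ref anyEndpoint);
        if (latest == null) return;

        int count = Mathf.Min(latest.Length / 4, face.sharedMesh.blendShapeCount);
        for (int i = 0; i < count; i++)
            face.SetBlendShapeWeight(i, System.BitConverter.ToSingle(latest, i * 4));
    }

    void OnDestroy()
    {
        udp?.Close();
    }
}
```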

For the moment, though, I would definitely recommend pre-baking the animations and audio, then triggering them through Unity scripts.
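
A minimal version of that triggering script, assuming each baked Audio2Face take was imported as an animation state with a matching AudioClip (the state and field names below are placeholders), could look something like:

```csharp
using UnityEngine;

// Plays a pre-baked Audio2Face take: starts the imported facial
// animation and its matching audio clip on the same frame so the
// lip-sync stays aligned. Names and fields here are placeholders.
[RequireComponent(typeof(AudioSource))]
public class BakedA2FPlayer : MonoBehaviour
{
    public Animator faceAnimator;           // Animator holding the baked take
    public AudioClip voiceLine;             // audio the take was baked from
    public string stateName = "A2F_Take01"; // animator state for this take

    AudioSource source;

    void Awake()
    {
        source = GetComponent<AudioSource>();
    }

    // Call this from your dialogue/AI logic when the line should play.
    public void PlayTake()
    {
        faceAnimator.Play(stateName, 0, 0f); // restart the facial animation
        source.PlayOneShot(voiceLine);       // start the matching audio
    }
}
```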