I’m a VR developer, and my company wants to use Audio2Face in the CryEngine.
I noticed that the workflow of Audio2Face requires the user to:
- Record or stream audio
- Generate a facial animation from the recorded data
- Export the animation for use in Unreal, Maya, Blender, etc.
Is there a way to simply upload an audio file, without the GUI, and get a response back through something like C++, instead of needing to export it each time?
You can do almost what you want.
Live-sync support for popular character apps is in the works and coming soon.
What that means is, you can stream audio in, and as shown in some of the tutorials, you can do that through script. At the moment you still need the app to set up and run the character; we haven't tested headless mode enough, so that might not fully work yet. But if your setup already has blendshape conversion configured, you can then stream the blendshape keyframes to your engine, similar to ARKit's keyframe stream from its tracker.
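To make the "stream audio in through script" part concrete, here is a minimal sketch of the client side. The chunking logic is generic; `push_audio_chunk` is a hypothetical stand-in for whatever transport your Audio2Face setup actually exposes, not a real API:

```python
# Hypothetical sketch: read a WAV file and stream it in fixed-size chunks.
# `push_audio_chunk` is a placeholder for the real transport call.
import math
import struct
import wave

SAMPLE_RATE = 16000
CHUNK_SAMPLES = 4000  # 0.25 s of audio per chunk at 16 kHz

def write_test_wav(path, seconds=1.0, freq=220.0):
    """Create a mono 16-bit PCM sine wave to stand in for recorded speech."""
    n = int(SAMPLE_RATE * seconds)
    frames = b"".join(
        struct.pack("<h", int(32767 * 0.3 * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)))
        for i in range(n)
    )
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        w.writeframes(frames)

def stream_wav(path, push_audio_chunk):
    """Read a WAV file and feed it to `push_audio_chunk` in fixed-size chunks."""
    with wave.open(path, "rb") as w:
        while True:
            frames = w.readframes(CHUNK_SAMPLES)
            if not frames:
                break
            push_audio_chunk(frames)

# Demo: collect chunks in a list instead of sending them anywhere.
sent = []
write_test_wav("test.wav")
stream_wav("test.wav", sent.append)
print(len(sent))  # → 4 (16000 samples / 4000 per chunk)
```

The same loop works for live capture: replace the file reads with microphone buffers of the same chunk size.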
Then you can have live sync in your engine with streaming audio done programmatically.
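On the receiving end, the engine-side handler for a blendshape keyframe stream could look something like the sketch below. The packet format is hypothetical, and the ARKit-style blendshape names (`jawOpen`, `mouthSmileLeft`) are only for illustration:

```python
# Hypothetical sketch: consume one blendshape keyframe packet on the engine side.
# The JSON packet layout and blendshape names are assumptions for illustration.
import json

def apply_keyframe(packet_json, set_blendshape_weight):
    """Parse one keyframe packet and forward clamped weights to the engine."""
    packet = json.loads(packet_json)
    for name, weight in packet["weights"].items():
        # Clamp to the 0..1 range engines typically expect for blendshape weights.
        set_blendshape_weight(name, max(0.0, min(1.0, weight)))

# Demo: record the applied weights in a dict instead of driving a real rig.
applied = {}
packet = json.dumps({"time": 0.033, "weights": {"jawOpen": 0.42, "mouthSmileLeft": 1.2}})
apply_keyframe(packet, lambda name, w: applied.__setitem__(name, w))
print(applied)  # → {'jawOpen': 0.42, 'mouthSmileLeft': 1.0}
```

In a real engine integration, `set_blendshape_weight` would be replaced by whatever per-frame morph-target API your engine provides.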
Check out this tutorial to get some ideas: