I’m trying to use Audio2Face to generate blendshapes that I can read in my app. What I need is:
- An Audio2Face LiveLink server
- Listeners set up for that server
- A way to send audio data to LiveLink
- A way to receive the resulting blendshapes from LiveLink
I understand that I would need to make some code changes in the Audio2Face build so that the API returns blendshapes as well. How do I go about setting up a listener in Python that sends requests to the LiveLink server and receives the responses?
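For the receiving side, a minimal sketch of what such a listener could look like, assuming the LiveLink stream is exposed as a TCP socket delivering newline-delimited JSON frames of blendshape names and weights (the actual port and wire format depend on your Audio2Face LiveLink configuration, so treat both as placeholders; the in-process "server" thread below only stands in for Audio2Face so the example runs on its own):

```python
import json
import socket
import threading

HOST, PORT = "127.0.0.1", 12030  # hypothetical; check your LiveLink settings

ready = threading.Event()

def serve_fake_frames():
    # Stand-in for the Audio2Face LiveLink server: sends a single
    # newline-delimited JSON frame of ARKit-style blendshape weights.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((HOST, PORT))
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    frame = {"Names": ["JawOpen", "MouthSmileLeft"], "Weights": [0.42, 0.1]}
    conn.sendall((json.dumps(frame) + "\n").encode())
    conn.close()
    srv.close()

def receive_frames():
    # The listener: connect to the LiveLink endpoint, buffer bytes, and
    # yield one decoded dict per newline-delimited JSON frame.
    with socket.create_connection((HOST, PORT)) as sock:
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:  # server closed the connection
                break
            buf += chunk
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                yield json.loads(line)

threading.Thread(target=serve_fake_frames, daemon=True).start()
ready.wait()
frames = list(receive_frames())
print(frames[0]["Names"], frames[0]["Weights"])
```

In your app you would replace the fake server with the real Audio2Face LiveLink endpoint and feed each decoded frame into your animation system as it arrives, instead of collecting them into a list.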