How to listen to audio2face livelink server in Python?

Hello there,

I’m trying to use audio2face to generate blendshapes that I can read in my app. What I need is:

  1. An audio2face livelink server
  2. Listeners set up for that server
  3. A way to send audio data to livelink
  4. A way to receive blendshapes back from livelink

I understand that I would need to make some code changes in the audio2face build to get that working, so that the API returns blendshapes as well. I’m wondering how to go about setting up a listener in Python that sends requests to the livelink server and receives the response.

  • You could use Audio2Face’s livelink extension as a starting point and reference for creating the custom streaming extension. It can be found in a folder similar to this: C:\Users\<UserName>\AppData\Local\ov\pkg\prod-audio2face-2023.1.1\exts\omni.avatar.livelink

  • You could also use the A2F->UE livelink plugin as a reference for receiving blendshape weights. It can be found in a folder similar to this: C:\Users\<UserName>\AppData\Local\ov\pkg\prod-audio2face-2023.1.1\ue-plugins\audio2face-ue-plugins
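To give a concrete idea of the receiving side, here is a minimal Python sketch of a listener. Note the assumptions: the wire format depends entirely on how you modify the streaming extension, so this sketch assumes your custom extension sends one JSON object per line over TCP, with hypothetical `"Names"`/`"Weights"` fields and a hypothetical port 12030. You would need to match these to whatever your modified extension actually emits (the two reference folders above show the real formats used by the stock extension and the UE plugin).

```python
import json
import socket

# Hypothetical host/port -- replace with whatever your custom
# streaming extension is configured to connect to.
HOST, PORT = "127.0.0.1", 12030


def parse_frame(line: bytes) -> dict:
    """Decode one newline-delimited JSON frame into {blendshape_name: weight}.

    Assumes a frame shaped like:
        {"Names": ["jawOpen", ...], "Weights": [0.42, ...]}
    which is an assumption, not the documented A2F format.
    """
    msg = json.loads(line)
    return dict(zip(msg["Names"], msg["Weights"]))


def listen() -> None:
    """Accept one connection from the streaming extension and read frames."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn, conn.makefile("rb") as stream:
            for line in stream:  # one JSON frame per line
                weights = parse_frame(line)
                # Drive your own rig/app here instead of printing.
                print(weights.get("jawOpen"))


if __name__ == "__main__":
    listen()
```

The sending side (audio in) is separate: with the stock build you would still push audio through A2F's own streaming audio player, and only the blendshape output would flow through a listener like this.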


Thanks Ehsan, I’ll try that