Connecting TTS to Audio2Face is not that difficult.
- create a scene with a graph of three nodes: an AudioPlayer, an Audio2Face node, and a mesh to drive (e.g. the default Mark head)
- make sure the connections are set: AudioPlayer time → Audio2Face → points → mesh
- create a custom extension that hosts your TTS service, where, for example, a button-click callback looks up the graph nodes, sets the audio stream on both the AudioPlayer and the Audio2Face node, and starts playback.
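The callback in the last step could look roughly like this. Note this is pseudocode: the node paths, `find_graph_node`, `set_track`, and `play` are placeholders standing in for whatever lookup and setter API your extension uses, not actual Omniverse calls:

```
def _on_generate_clicked():
    # 1. Run TTS and get raw PCM audio back
    result = my_tts_service.synthesize(prompt_text)

    # 2. Convert the buffer to an AudioTrack (see the snippet below)
    track = make_audio_track(result.audio_data)

    # 3. Look up the graph nodes and hand both the same track
    player_node = find_graph_node("/World/audio2face/Player")     # placeholder path
    a2f_node = find_graph_node("/World/audio2face/CoreFullface")  # placeholder path
    player_node.set_track(track)  # placeholder setter
    a2f_node.set_track(track)     # placeholder setter

    # 4. Start playback so the mesh animates
    player_node.play()
```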
Here’s a sample snippet, where result.audio_data is your raw audio buffer (assumed here to be int16 at a 48 kHz sample rate) and you already hold references to the AudioPlayer and Audio2Face node instances:
import numpy as np
import omni.audio2face.core

# Interpret the raw bytes as signed 16-bit PCM samples
audio_data = np.frombuffer(result.audio_data, dtype=np.int16)
# Normalize to float32 in [-1.0, 1.0)
audio_data = (1.0 / 32768.0) * audio_data.astype(np.float32)
# Wrap in an AudioTrack at 48 kHz so it can be assigned to the nodes
track = omni.audio2face.core.a2f.audio.AudioTrack(audio_data, 48000)
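The int16 → float32 normalization above can also be wrapped in a small standalone helper (the function name is mine, purely illustrative), which is handy for unit-testing the conversion outside of Omniverse:

```python
import numpy as np


def pcm16_bytes_to_float32(raw: bytes) -> np.ndarray:
    """Convert raw little-endian int16 PCM bytes to float32 samples in [-1.0, 1.0)."""
    samples = np.frombuffer(raw, dtype=np.int16)
    return samples.astype(np.float32) / 32768.0
```

For example, the int16 values 0, 16384, and -32768 map to 0.0, 0.5, and -1.0 respectively.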
When using mesh transfer, the graph is more complicated, but the same mesh ends up being driven, so the core concept does not change.
Note that the 48 kHz audio will be resampled; Audio2Face expects 16 kHz, as far as I can tell.
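Since Audio2Face appears to handle the resampling internally, you shouldn't need to do it yourself, but if you'd rather hand it 16 kHz audio directly, here is a minimal linear-interpolation resampler sketch using only numpy (the helper is hypothetical, not part of any Audio2Face API; for production quality you'd want a proper low-pass filter, e.g. scipy.signal.resample_poly):

```python
import numpy as np


def resample_linear(x: np.ndarray, src_rate: int = 48000, dst_rate: int = 16000) -> np.ndarray:
    """Resample a 1-D float signal from src_rate to dst_rate via linear interpolation."""
    n_dst = int(len(x) * dst_rate / src_rate)
    t_src = np.arange(len(x)) / src_rate   # timestamps of the input samples
    t_dst = np.arange(n_dst) / dst_rate    # timestamps of the output samples
    return np.interp(t_dst, t_src, x).astype(np.float32)
```

One second of 48 kHz audio (48000 samples) comes out as 16000 samples at 16 kHz.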