Hi there, I’m streaming audio to Audio2Face and streaming the animation out to another program.
I’m trying to control the character’s emotion from the same Python script that sends the audio - I just can’t for the life of me see how to actually add the ‘Emotion’ widget to my scene and get it to affect my mesh.
Currently it’s generating a wav file and sending that successfully to Audio2Face via gRPC, which then goes out live through the Live Link output to Blender. That side is working well. Essentially, I want to control the emotion sliders, which I believe I might be able to do via A2E/SetEmotion?
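For reference, the audio side is based on the sample gRPC client that ships with Audio2Face (test_client.py), roughly like this - the port, instance path, and wav filename are just placeholders from my setup, and the field names follow the sample proto, so double-check against the client in your install:

```python
# Rough sketch of the audio push, based on the sample gRPC client shipped
# with Audio2Face. The audio2face_pb2* modules are the ones generated in the
# sample's folder; URL, instance path, and wav name are placeholders.
import grpc
import numpy as np
import soundfile

import audio2face_pb2
import audio2face_pb2_grpc

URL = "localhost:50051"                          # assumed default A2F gRPC port
INSTANCE = "/World/audio2face/PlayerStreaming"   # streaming player prim in my scene

# Read the generated wav as float32; the service expects mono samples
audio_data, samplerate = soundfile.read("speech.wav", dtype="float32")
if audio_data.ndim > 1:
    audio_data = np.mean(audio_data, axis=1)

with grpc.insecure_channel(URL) as channel:
    stub = audio2face_pb2_grpc.Audio2FaceStub(channel)
    request = audio2face_pb2.PushAudioRequest()
    request.audio_data = audio_data.astype(np.float32).tobytes()
    request.samplerate = samplerate
    request.instance_name = INSTANCE
    request.block_until_playback_is_finished = False
    response = stub.PushAudio(request)
    print(response)
```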
The problem is I can’t see how to actually add the emotion sliders to the scene. I’m very new to Audio2Face and am building things up from the template scenes. I have a scene with a Streaming Player driving the default Mark mesh. In other template scenes with the non-streaming player I’ve seen additional widgets under the player, like Emotion, Auto Emotion, Pre-Processing, and Post-Processing. I think I need to somehow add the Emotion widget to my scene and hook it up to drive the blendshapes in the graph - but I’m unsure how to do that.
Oh that’s perfect, thank you - I actually didn’t even try sending the emotions because I thought I needed the widget, but that worked brilliantly. Thank you so much :)
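For anyone else landing here, this is roughly the kind of call that did the trick - a minimal sketch assuming the headless REST API is reachable on its default port (8011). The instance path and payload field names are my best guess, so check the API docs for your Audio2Face build if they differ:

```python
# Minimal sketch: set the A2E emotion weights over the headless REST API.
# The port, instance path, and payload shape below are assumptions from my
# setup - verify them against the API docs for your Audio2Face build.
import requests

A2F_URL = "http://localhost:8011"              # assumed default REST port
INSTANCE = "/World/audio2face/CoreFullface"    # assumed A2F instance prim path

payload = {
    "a2f_instance": INSTANCE,
    # Assumed shape: emotion name -> weight in [0, 1]
    "emotions": {"joy": 0.7, "amazement": 0.3},
}

resp = requests.post(f"{A2F_URL}/A2F/A2E/SetEmotion", json=payload, timeout=10)
resp.raise_for_status()
print(resp.json())
```

In my case at least, setting the values over the API was enough - no Emotion widget needed in the scene.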