What is the workflow for animating speech on a MetaHuman with Audio2Face?

I am trying to integrate speech animation on MetaHumans for a game in Unreal Engine. I have noticed people struggling with exporting MetaHumans and importing them into Audio2Face due to “grooms”. Is this the only workflow offered by Audio2Face, or is there any way to use the software inside Unreal?

I guess my question is: is the only way to animate speech on a MetaHuman to import it into Audio2Face and then export it back into Unreal?

Hello @corinadowfc! Welcome to the community! I’ve let the development team know about your post. Thanks for reaching out!

Hi @corinadowfc, welcome to Audio2Face.
We’re also investigating the workflow between Audio2Face and UE/MetaHuman. Could you describe your current approach in your post here? Is it based on the Omniverse UE connector?

people struggling with exporting MetaHumans and importing them into Audio2Face due to “grooms”

And yes, at the moment, the only approach is importing the MetaHuman asset into Audio2Face and then exporting it back to UE, as you said. We plan to add a live connection in the near future and will post an update here once that’s ready.
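
Once the animated cache is exported from Audio2Face, it can be worth sanity-checking it before bringing it into UE. Below is a minimal sketch, assuming the USD Python bindings (pxr) that ship with Omniverse are available and that the exported file is named a2f_export.usd (the file name is an assumption):

```python
# Minimal sketch: inspect an animated USD cache exported from Audio2Face.
# Assumes the pxr (USD) Python bindings from Omniverse are on the Python path,
# and "a2f_export.usd" is a hypothetical export file name.
from pxr import Usd, UsdGeom

stage = Usd.Stage.Open("a2f_export.usd")
print("Time range:", stage.GetStartTimeCode(), "-", stage.GetEndTimeCode())

# List each mesh and how many animated point samples it carries.
for prim in stage.Traverse():
    if prim.IsA(UsdGeom.Mesh):
        mesh = UsdGeom.Mesh(prim)
        num_samples = mesh.GetPointsAttr().GetNumTimeSamples()
        print(f"{prim.GetPath()}: {num_samples} animated point samples")
```

If the meshes report zero time samples, the animation did not make it into the export and re-importing into UE will give a static face.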

In the meantime, it’s worth checking out the latest Audio2Face 2021.3, as it supports a blendshape solve function, and driving the MetaHuman rig with the solved blendshape weights can be an option. Thank you.
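
The blendshape solve can export the per-frame weights to a JSON file. Below is a minimal sketch that converts such an export into a CSV that could then be brought into UE as a curve table; the file name and the "facsNames"/"weightMat" keys are assumptions about the export format, so adjust them to match your actual file:

```python
# Minimal sketch: convert Audio2Face blendshape-solve weights (JSON) to CSV.
# File name and JSON keys ("facsNames", "weightMat") are assumptions.
import csv
import json

with open("a2f_blendshape_weights.json") as f:
    data = json.load(f)

pose_names = data["facsNames"]   # one column per blendshape pose
weight_mat = data["weightMat"]   # list of per-frame weight lists

# Write one row per frame, suitable for importing as a curve table in UE.
with open("a2f_blendshape_weights.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["frame"] + list(pose_names))
    for frame_index, weights in enumerate(weight_mat):
        writer.writerow([frame_index] + list(weights))
```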

Hi yseol,

Thank you for your response.

I am looking to implement live microphone input in conjunction with MetaHumans, so it would need to run within Unreal to give players of the game access to this functionality.