Hi,
I’m struggling to find an appropriate Audio2Face to Unreal workflow, with the goal of having two characters talk to each other in Unreal Engine 5.
For my workflow I’ve tried two methods. First, I tried the latest Live Link pipeline (as described in your MetaHuman blendshape streaming tutorial), but it doesn’t run smoothly enough to be a viable option for me, even though I believe my PC has the specs to handle it (AMD Ryzen 9 5900X 12-core processor at 3.70 GHz, 64 GB RAM, and an RTX 3080 graphics card).
I’ve tried changing the project settings in Unreal, setting the target hardware to Scalable and Engine Scalability to Medium, but it still runs unacceptably slowly. Are there any settings I might have overlooked?
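In case it helps pinpoint what I’m missing, this is roughly what I mean by dropping Engine Scalability to Medium, i.e. the sg.* console variables that the scalability menu sets (values 0 = Low, 1 = Medium; I left resolution quality at its default):

    sg.ViewDistanceQuality 1
    sg.AntiAliasingQuality 1
    sg.ShadowQuality 1
    sg.PostProcessQuality 1
    sg.TextureQuality 1
    sg.EffectsQuality 1
    sg.FoliageQuality 1
    sg.ShadingQuality 1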
If my specs are simply not up to it, I’m fine with exporting the data from Audio2Face first, but that process also isn’t entirely clear to me.
- Do I need to export the emotion keys separately? I have imported the Audio2Face blendshape conversion into Unreal, but the facial expressions still look stiff. Even if comparing the Unreal import with the Omniverse mesh isn’t entirely fair, the Omniverse mesh feels noticeably more expressive than the Unreal USD import.
- I did try to export the emotion keys separately in Audio2Face, but it’s unclear to me what I need to select to do that properly. I’ve tried selecting each available layer in the Stage panel, but every layer gives me the export error “Please select only mesh prims”, except the mark_bs_arkit > neutral layer, which gives a different notification: “None of the export meshes is connected to an Audio2Face instance”.
That’s where my attempts have stalled. As you can probably tell, I’m still an animation (and Unreal) beginner, and the challenge of learning this is that the tech changes so rapidly that many workflow tutorials I find online seem outdated. But I’m eager to put these tools to use. Hope you can help.
Best,
Rogier