Unable to load animation from Audio2Face into Unreal, please help

Export from Audio2Face:
mama_lucha_mouth_capture.rar (64.7 MB)

Thank you! I will look into this.

So, what I see with the file you uploaded is that it is a USD cache file. Unreal doesn't import this correctly yet, although they might have fixed that in 5.1. I was able to open the file with Unreal's USD Stage and play it back. However, for Unreal to treat this as a skeleton, the character needs a joint. Also, on the A2F side, this was not set up as an exported USD animation clip that was then applied to the skeleton. The steps in the A2F tutorials are:

1. Do the character targeting, then create the blendshape solve.
2. Once you have the blendshape solve, export the USD animation.
3. With the animation clip saved, re-open your character in A2F (the one that has a joint) and apply the clip by dragging it into the Stage, then onto the character in the viewport.
4. Save the result as a new USD file that can be imported into Unreal.

So that is three USD files: the base character, the animation clip, and the combination of the two. Let me know if you're following. Thanks!
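For reference, the third "combination" file can be thought of as a thin USD layer that composes the other two. The fragment below is only a sketch with hypothetical file names, and A2F may author the composition differently (e.g. as a reference rather than a sublayer) when you drag the clip onto the character:

```usda
#usda 1.0
(
    subLayers = [
        @./a2f_animation_clip.usd@,
        @./base_character.usd@
    ]
)
```

In USD layer composition, earlier sublayers are stronger, so listing the animation clip first lets its time samples win over the base character's rest pose.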

Wow, that is such an impractical and convoluted workflow. I guess this app only works well with a MetaHuman. I tried loading my character in Omniverse Machinima and also imported the USD animation made in Audio2Face to see if it could be an easier workflow. Using the toolbox, I loaded the animation cache onto my character's original face. The animation works fine, but the problem is that when I load the animation cache, the face drops to the ground and becomes detached from the body's animation. How can I fix this issue?

Hello @falconking. Yes, the current setup is really created to work easily with MetaHumans. We have plans on our roadmap to update this to allow an easy workflow with custom characters in engine as well. As far as the issue you are showing here with Machinima, can you walk me through the exact steps you took? Thank you!

Yes. I exported the animated and rigged model from Blender as USD using the Blender build from Omniverse. Then I loaded the character in Machinima and loaded the USD animation from Audio2Face. Using the toolbox, I transferred the animation cache from the face to the character's face. The face talks, but now it is on the ground and detached from the animated body.

I managed to keep the head in the right place, but when I copy the animation cache using the toolbox, the head becomes detached from the skeleton. How do I keep it rigged to the skeleton?

Hello @falconking. The USD with facial animation should not be a cache if it was exported as blendshapes. I know this has been a frustrating process for you, and we plan to streamline this workflow. In the meantime, if you can upload the full character as a USD with no animation, I will attempt to walk you through a workflow within Omniverse.

Thank you. I will upload it without the animation.

Here is the model. I had to upload it to cloud storage because it was too big to attach here.
MyAirBridge.com | Send or share big files up to 20 GiB for free

Thank you, received. Will look into this.

Hello @falconking. I was able to attach the cache without the location changing. When you export the cache file, select "Rigid Xform as keys". Since your head is within a hierarchy, I think it's losing those values when you use the toolbox point transfer.
[Screenshot of the export settings showing the "Rigid Xform as keys" option]
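If it helps to see what that option changes: my understanding is that exporting with rigid transform keys bakes the mesh's placement as a time-sampled `xformOp:transform` on the cached prim, so the head keeps its position relative to the parent hierarchy instead of relying only on baked point data. A rough, hypothetical .usda fragment of that kind of output (prim name and values invented for illustration):

```usda
def Mesh "head_cache"
{
    matrix4d xformOp:transform.timeSamples = {
        0: ( (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1) ),
        10: ( (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0.5, 0, 1) ),
    }
    uniform token[] xformOpOrder = ["xformOp:transform"]
}
```

Without those transform samples, the cache only carries points in its own space, which would explain the mesh dropping to the origin when attached.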

Let me know if it’s working for you.

Thanks!