Audio2Face 2022.1 (Open Beta) Released

Audio2Face 2022.1 comes packed with new features that let you create a fully animated face performance: Audio2Emotion analyzes the audio and automatically animates emotion into the character's performance; the new Full-Face Neural Network animates all the features of the face, including the eyes, jaw, and tongue; and the new Character Setup feature brings a more intuitive workflow that guides you through preparing your assets for the retargeting process, enabling a full-face character transfer.

New Features

Emotions

  • Audio2Face users can now direct the emotion of their avatars’ performances over time. The new Full-Face Neural Network has been trained on a range of emotions such as joy, amazement, anger, and sadness. You can engage and animate these emotions through a set of emotion controls, or automate the process with Audio2Emotion.

Audio2Emotion

  • Audio2Emotion is a neural network that detects a character's emotion from an audio performance. It generates the character's emotion states in Audio2Face and keyframes the emotion sliders for you. If you wish to go beyond the generated result, you can edit the keyframes manually (see the sketch below).
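
Since Audio2Emotion writes its output as ordinary keyframes, one way to tweak them outside the UI is to edit the USD time samples directly. Below is a minimal sketch using the pxr Python API; the stage file, prim path, and attribute name are placeholders, since the actual names depend on how your Audio2Face scene is structured.

```python
from pxr import Usd

# Placeholder file and prim/attribute names: adjust to your actual A2F scene.
stage = Usd.Stage.Open("my_a2f_scene.usd")
joy = stage.GetPrimAtPath("/World/audio2face/emotion").GetAttribute("joy")

# Inspect the keyframes Audio2Emotion generated for this emotion slider.
for t in joy.GetTimeSamples():
    print(f"frame {t}: joy = {joy.Get(time=t)}")

# Manually override one generated keyframe, e.g. soften joy at frame 48.
joy.Set(0.35, time=48.0)
stage.GetRootLayer().Save()
```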

Keyframing

  • A simple keyframing UI is provided in the Emotion panel and the Post-Processing panel, allowing higher-precision control over your animated character. Post-processing parameters also let you adjust the position of the teeth on the jaw, the convergence of the eyes, and many other useful deltas to refine the performance. All post-processing deltas can be keyframed, allowing adjustment throughout the performance (see the sketch below).
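
For a rough idea of what keyframing a post-processing delta looks like at the USD level, the sketch below writes a simple value ramp as time samples. The prim path and attribute name ("eyeConvergence") are assumptions for illustration, not confirmed Audio2Face names.

```python
from pxr import Usd

stage = Usd.Stage.Open("my_a2f_scene.usd")  # placeholder file
# Placeholder prim/attribute; substitute the real post-processing prim.
conv = stage.GetPrimAtPath("/World/audio2face/post").GetAttribute("eyeConvergence")

# Keyframe a ramp: ease the eye convergence up mid-shot, then back down.
for frame, value in [(0, 0.0), (24, 0.15), (48, 0.0)]:
    conv.Set(value, time=frame)
stage.GetRootLayer().Save()
```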

Full-Face Character Setup

  • Because setting up a full-face character is more complex, the Character Setup UI guides you through setting up your character meshes and connecting them to an Audio2Face pipeline. With an intuitive interface for connecting the teeth, gums, eyes, and tongue, the application checks the assigned meshes for compatibility and provides guidance if an error is found. Once the initial mesh setup is done, running “Setup Character” makes all the mesh connections to the Full-Face pipeline and the graphs running in the background. From there, set up the correspondence for character retargeting.

Mesh Tools

  • To help further, we have provided additional tools such as “Mesh Separate”, “Mesh Extract by Subsets”, and “Freeze Transformations”. The Character Setup interface will prompt you when it finds an incompatible mesh and point you toward these tools to resolve the issue (a rough sketch of freezing a transform follows below).
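
For reference, “freezing” a transformation amounts to baking the accumulated transform into the mesh points and then clearing the local xform ops. Here is a minimal pxr sketch, assuming a mesh at a placeholder path and no animated or parent transforms that need preserving:

```python
from pxr import Usd, UsdGeom, Gf, Vt

stage = Usd.Stage.Open("character.usd")                  # placeholder file
mesh = UsdGeom.Mesh(stage.GetPrimAtPath("/World/head"))  # placeholder path

# Accumulated local-to-world transform at the default time.
cache = UsdGeom.XformCache(Usd.TimeCode.Default())
world = cache.GetLocalToWorldTransform(mesh.GetPrim())

# Bake the transform into the points...
points = mesh.GetPointsAttr().Get()
baked = Vt.Vec3fArray([Gf.Vec3f(world.Transform(Gf.Vec3d(p))) for p in points])
mesh.GetPointsAttr().Set(baked)

# ...then clear the xform ops so the mesh rests at the identity transform.
UsdGeom.Xformable(mesh.GetPrim()).ClearXformOpOrder()
stage.GetRootLayer().Save()
```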

Minimum Mesh Requirements for Character Setup

A2F does not work with combined meshes; it requires that a head be broken down into its respective components. The head mesh, left eye, right eye, lower teeth, and tongue must each be individual meshes and cannot contain sub-meshes or be combined in any way (a quick verification sketch follows below). Please see the online documentation and the NVOD A2F tutorial videos for further guidance.
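
As a quick sanity check before running Character Setup, you could traverse the stage and confirm that each required part exists as its own UsdGeom.Mesh prim. The part names below are assumptions for illustration; match them to your own asset's naming.

```python
from pxr import Usd, UsdGeom

REQUIRED = ["head", "eye_left", "eye_right", "teeth_lower", "tongue"]  # assumed names

stage = Usd.Stage.Open("character.usd")  # placeholder file
mesh_names = {p.GetName().lower() for p in stage.Traverse() if p.IsA(UsdGeom.Mesh)}

for part in REQUIRED:
    found = part in mesh_names
    print(f"{part}: {'found' if found else 'MISSING - separate it into its own mesh'}")
```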

Audio2Face and Audio2Emotion Overview

Character Setup Overview

Amazing work! This will surely boost creativity. I’m just wondering: does A2F 2022.1 now only take USD formats? The older versions used to take OBJ. Did that change in the newer version?

Thank you

Hi @Ray_J, you can still import OBJ files into Audio2Face; they will be converted to USD upon import.

Hi there!

I am struggling to find the latest Debra file (with hair, eyebrows, eyes, teeth, and tongue) as shown in the new tutorial. Will you add that to localhost soon? Also, I have been using a customized character from CC4 for testing Audio2Face. Their A2F export has not been updated yet; does anyone know a workaround where I can just export the head mesh including the eyes, teeth, tongue, and hair?

Thanks a lot,
yif

Hi @yifan.hu, you’ll find it on localhost:

omniverse://localhost/NVIDIA/Assets/Audio2Face/Samples/reallusion/Debra_A2F_CC_GameBase/Debra_fullface.usd