This release of Audio2Face, version 2023.2.0, introduces significant Blend Shape quality improvements and expands the export capabilities so that all motion generated by the Audio2Face network can be exported. The Blend Shape Face Tuner gives the user more control and a stronger shape output, with the ability to optimize the weight offsets driving your character. Improved blink weights and the eye and jaw transforms are also exportable. In addition, it has never been easier to transfer a facial animation performance from Audio2Face to your own unique characters: the mesh fitting process has been further simplified and can now automatically generate correspondence points.
Auto Detection automatically generates 25 correspondence points in the mesh fitting process.
Interactive Detection allows the user to set five initial guide points and auto-detect the remaining 20.
Expanded Export options
Jaw and Eye joint transform data can be exported from the A2F data conversion tab.
Improved Blendshape Blink Export and Live Streaming
When exporting blendshape blinks, A2F can now autocorrect for variation in assets to ensure a value of 1 (a full eyelid closure) is exported.
The blink export corresponds with the blink activation from the Audio2Face network.
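For anyone curious what that autocorrection amounts to, here is a minimal sketch of the idea, assuming the blink weights are a plain list of floats; `normalize_blink_weights` is a hypothetical helper for illustration, not an A2F API.

```python
# Hypothetical illustration of the autocorrect idea: rescale a blink
# weight curve so that a full eyelid closure reaches exactly 1.0.
def normalize_blink_weights(weights):
    peak = max(weights, default=0.0)
    if peak <= 0.0:
        return list(weights)          # nothing to scale
    return [w / peak for w in weights]

# Example: a capture whose blink only peaks at 0.8 is stretched so the
# deepest closure hits 1.0.
print(normalize_blink_weights([0.0, 0.2, 0.8, 0.4, 0.0]))
# -> [0.0, 0.25, 1.0, 0.5, 0.0]
```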
Face Tuner Node
The “Tuner” node allows modification of the blendshape weights using gain and offset, so you can amplify, reduce, or offset individual pose weights as desired.
Modifying the weight output lets you compensate for divergent ranges of motion in the receiving asset’s blendshape setup.
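As an illustration only, here is a minimal sketch of a gain/offset adjustment on pose weights; the formula (weight * gain + offset) and the clamp to [0, 1] are assumptions about typical tuner behavior rather than the node’s exact internals, and `tune_weights` is a hypothetical helper.

```python
# Hypothetical sketch of a gain/offset adjustment on blendshape pose weights:
# weight * gain + offset, clamped to [0, 1]; defaults leave a pose unchanged.
def tune_weights(weights, gains, offsets):
    tuned = {}
    for pose, w in weights.items():
        g = gains.get(pose, 1.0)    # gain of 1.0 = no amplification/reduction
        o = offsets.get(pose, 0.0)  # offset of 0.0 = no shift
        tuned[pose] = min(max(w * g + o, 0.0), 1.0)
    return tuned

raw = {"jawOpen": 0.35, "eyeBlinkLeft": 0.9}
print(tune_weights(raw, gains={"jawOpen": 1.5}, offsets={"eyeBlinkLeft": -0.1}))
# jawOpen is amplified (0.35 * 1.5), eyeBlinkLeft is shifted down (0.9 - 0.1).
```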
Note: Please be aware of the following change in the LiveLink UE plugin workflow and documentation: the directory to copy/paste is no longer called “OmniverseLiveLink”. Please copy and paste the “ACE” directory instead.
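If you script that copy step, a minimal sketch might look like the following; both paths are placeholders for your own Audio2Face install and Unreal project locations.

```python
# Hypothetical sketch of copying the "ACE" plugin directory into a UE project.
# Both paths below are placeholders; point them at your actual install/project.
import shutil
from pathlib import Path

src = Path(r"C:/A2F/ue-plugins/ACE")         # placeholder: where A2F ships the plugin
dst = Path(r"C:/MyUEProject/Plugins/ACE")    # placeholder: your UE project's Plugins dir

shutil.copytree(src, dst, dirs_exist_ok=True)
print(f"Copied {src} -> {dst}")
```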
@omnilen, thank you for the flag.
There’s a mistake in the tutorial.
Skeleton visualization is now part of the omni.anim.skelJoint extension, which is loaded by default in the A2F app.
We will correct the tutorial.
Why are there 4 rotation values for joint exports in the JSON file? I would only expect 3 (x, y, z). I plan to import these values into Maya with a Python script.
@stu9354 I don’t use A2F often, so I would defer to the mods/devs on this. I could, however, make an educated guess that you are seeing a quaternion as opposed to an Euler rotation?
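If that guess is right, a minimal sketch of converting the exported quaternions to Euler angles and keying them in Maya might look like the following; the JSON layout (a "joints" key mapping joint names to per-frame [x, y, z, w] lists), the component order, and the file path are all assumptions, so adjust the parsing to match your actual export.

```python
# Hypothetical sketch: convert exported quaternions to Euler XYZ degrees
# inside Maya. The JSON layout and (x, y, z, w) ordering are assumptions;
# check them against the file A2F actually writes.
import json
import math
import maya.api.OpenMaya as om
import maya.cmds as cmds

with open(r"C:/exports/a2f_joints.json") as f:   # placeholder path
    data = json.load(f)

for joint_name, frames in data["joints"].items():
    for frame, (x, y, z, w) in enumerate(frames):
        euler = om.MQuaternion(x, y, z, w).asEulerRotation()   # radians, XYZ order
        rx, ry, rz = (math.degrees(a) for a in (euler.x, euler.y, euler.z))
        cmds.setKeyframe(joint_name, attribute="rotateX", time=frame, value=rx)
        cmds.setKeyframe(joint_name, attribute="rotateY", time=frame, value=ry)
        cmds.setKeyframe(joint_name, attribute="rotateZ", time=frame, value=rz)
```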
Can we import the Jaw and Eye joint transform data into Blender as well? If yes, how? Also, how can we export tongue mesh data? Before this, to import eye and jaw data, I had to export all USD files and import them here. This feature will be extremely helpful. Thanks.
Hi,
I have the latest beta installed. When I try to export weights to a JSON file, the dialogue shows me 100% complete but then never finishes and the files are not written. Any ideas???
Thanks,
Jeff