In Audio2Face 2022.1.1, is there any way to export the separated eyes, tongue, and jaw?

There are a lot of video tutorials showing how to export the face mesh with blendshapes or as a cache, but nothing about exporting the eyes, tongue, and jaw. These animations are driven by the audio and applied to each part's xform. There should be some way to export them so they can be used together with the face.

Hello @adun_mk2, did you have a chance to check out the tutorial from this post?
LIVESTREAM: Getting Started: Audio2Emotion (Wed. August 31 3PM PDT / 6PM EDT). Let me know if this helps!

Dear WendyGram,
thank you for the tutorial; it taught me something new: transferring face animation to a full-body mesh in Audio2Face.

But my question is different from the workflow in that tutorial. By "exporting" I mean bringing the animated data from Audio2Face into a DCC. Here is my workflow:

1. Bring the face mesh into Audio2Face, including the separated eyes, tongue, and jaw (from Maya or Blender).
2. Do the character setup: skin mesh fitting, post wrap, etc.
3. Import the audio and tweak the emotion for a smooth talking animation.
4. Import the face mesh with blendshapes and use Blendshape Conversion to get keyframed blendshape animation.
5. Export the blendshape animation to JSON, and use a Python script to read it and set keys on the blendshape node in Maya.
6. Try to export the eye, tongue, and jaw xform keyframes to Maya (or another DCC).

I have no idea how to do step 6. The animation on those parts consists of transform values on their xform nodes.
How can I export those values?

Most YouTube videos covering step 6 just redo that animation in the DCC after transferring the blendshape face.
Maybe those videos are out of date; in Audio2Face 2022.1.1 there is already smooth, high-quality eye, tongue, and jaw animation. Just exporting it and using it in the DCC seems like a better idea to me.
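For step 5, the read-and-key script can be sketched roughly like this. This is a minimal sketch, not the official workflow: the JSON key names (`facsNames`, `weightMat`) and the blendshape node name `blendShape1` are assumptions, so inspect your own export and adjust them.

```python
import json

def load_blendshape_anim(json_path):
    """Read an Audio2Face blendshape-weight JSON export.

    The key names below ("facsNames", "weightMat") are assumptions --
    open your own export and adjust them if they differ.
    """
    with open(json_path) as f:
        data = json.load(f)
    pose_names = data["facsNames"]   # one name per blendshape target
    weight_rows = data["weightMat"]  # one list of weights per frame
    return pose_names, weight_rows

def key_blendshapes(pose_names, weight_rows, node="blendShape1", start_frame=1):
    """Set one key per target per frame on the Maya blendshape node.

    Runs only inside Maya; "blendShape1" is a placeholder node name.
    """
    import maya.cmds as cmds  # available only inside Maya
    for i, weights in enumerate(weight_rows):
        for name, weight in zip(pose_names, weights):
            cmds.setKeyframe("%s.%s" % (node, name),
                             time=start_frame + i, value=weight)
```

The parser is plain Python, so you can test it anywhere; only `key_blendshapes` needs to run in Maya's Script Editor.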

Hey @adun_mk2, you should be able to export your character with “Export as USD Cache”
using the “Rigid Xform as Keys” option when prompted.
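A note on getting those keys into another DCC afterwards: with "Rigid Xform as Keys", each eye/tongue/jaw prim ends up with a time-sampled 4x4 `xformOp:transform` matrix, while Maya and Blender want translate/rotate/scale channels. A pure-Python decomposition might look like the sketch below (assuming USD's row-vector convention with translation in the last row, a matrix without shear, and Maya's default XYZ rotate order; verify these against your own cache):

```python
import math

def decompose_trs(m):
    """Split a flat row-major 4x4 transform (16 floats) into
    translate, rotate (XYZ Euler, degrees) and scale.

    Assumes USD's row-vector convention (translation in the last row)
    and no shear; the rotate order matches Maya's default XYZ.
    """
    rows = [m[0:3], m[4:7], m[8:11]]
    translate = list(m[12:15])
    scale = [math.sqrt(sum(c * c for c in row)) for row in rows]
    # normalize the upper 3x3 rows to isolate the pure rotation
    r = [[c / s for c in row] for row, s in zip(rows, scale)]
    sy = max(-1.0, min(1.0, -r[0][2]))
    ry = math.asin(sy)
    if abs(sy) < 0.9999:  # normal case
        rx = math.atan2(r[1][2], r[2][2])
        rz = math.atan2(r[0][1], r[0][0])
    else:  # gimbal lock: fold the remaining twist into rx
        rx = math.atan2(-r[2][1], r[1][1])
        rz = 0.0
    return translate, [math.degrees(a) for a in (rx, ry, rz)], scale
```

Running this per time sample and keying the three channels avoids redoing the eye/jaw/tongue animation by hand in the DCC.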

Thank you wtelford1, the keyframes exported.

And here I hit a bug:

2022-09-23 02:48:05 [189,366ms] [Error] [omni.audio2face.common.scripts.utils] Exception when async '<function Exporter._run_export_mesh_usd_async at 0x000001DCD06603A8>'
2022-09-23 02:48:05 [189,366ms] [Error] [omni.audio2face.common.scripts.utils] 
	Error in 'pxrInternal_v0_20__pxrReserved__::UsdStage::_SetValueImpl' at line 6043 in file C:\b\w\ca6c508eae419cf8\USD\pxr\usd\usd\stage.cpp : 'Type mismatch for </World/transfer_character.xformOp:transform>: expected 'GfMatrix4d', got 'void''
2022-09-23 02:48:05 [189,371ms] [Error] [omni.audio2face.common.scripts.utils] Traceback (most recent call last):
  File "d:\nvidia_ov\pkg\audio2face-2022.1.1\exts\omni.audio2face.common\omni\audio2face\common\scripts\utils.py", line 31, in wrapper
    return await func(*args, **kwargs)
  File "d:\nvidia_ov\pkg\audio2face-2022.1.1\exts\omni.audio2face.exporter\omni\audio2face\exporter\scripts\exporter.py", line 1027, in _run_export_mesh_usd_async
    copy_op.GetAttr().Set(op.Get())
pxr.Tf.ErrorException: 
	Error in 'pxrInternal_v0_20__pxrReserved__::UsdStage::_SetValueImpl' at line 6043 in file C:\b\w\ca6c508eae419cf8\USD\pxr\usd\usd\stage.cpp : 'Type mismatch for </World/transfer_character.xformOp:transform>: expected 'GfMatrix4d', got 'void''

It seems that the /World/transfer_character node has no authored value while the transform (GfMatrix4d) stays at its default. I typed 1.0 into the first matrix field, and that got me past the error.

My workflow is similar to yours, and I have these questions too.

Hi @adun_mk2,
Thank you for trying this and also suggesting the solution; we filed this as a bug and will fix it in a future release!

Thanks for the answer, but do you know how the rigid-xform animation can be played back in Blender?

Let’s continue this conversation on the other thread as it’s related to Blender

Hi,
can you please elaborate on this bug and on the solution proposed by @adun_mk2? I have the exact same error and I don't understand what I need to do to get past this step.

Regards,
Willy

Hi there, can you tell us the steps you take that lead to this error?

Sure! I’ve done these steps in order.

  1. Opened a new empty scene based on the “Sunlight” Template
  2. Imported my mesh made in Maya and set it to 0,0,0
  3. Started the Character Transfer Process, and followed it up to the end (i.e. the tongue mesh post wrap step)
  4. Selected the “mark” mesh that was automatically created by the Character Transfer Process and added a A2F Pipeline via the Audio2Face Tool Tab
  5. In the “ATTACHED PRIMS” box within the Audio2Face Tool tab, I’ve selected the tongue mesh from the template group and the eyes and lower denture Xforms from the “character_transfer” group (i.e. those made automatically by the character transfer process)
  6. Played the sample audio to check that everything went correctly
  7. Substituted the sample audio with an audio file of my choice and checked the result of playing it
  8. Hid the templates by clicking on the little eye icon in the Stage tab for the whole a2ftemplate scope
  9. In the A2F Data Conversion Tab, within the “GEOMETRY CACHE” box, I’ve selected visible meshes and clicked on “Export as USD Cache”, pressed ok on the info box, and selected “RIGID XFORM AS KEYS” as export option.

My hypothesis is that the transfer_character xform is made by the process with the identity matrix as default value for the transform, but that value isn’t really set unless you change it at least once. In fact, I got past the error by changing the value of one of the elements of the matrix and then changing it back to its previous value.

Hope it helps


I tried this on a geometry exported from Maya and it worked without an error. Are you able to provide your original model for further investigation?