CC character error with Audio2Face

A lot of progress here :)

Next up are the teeth.

The above video workflow also works for CC characters with Reallusion Actorcore motions, just as esusantolim said in his video earlier.

So we see the A2F lipsync + IDLE motion on the female body:

But how can I loop the IDLE movement, and how can I control its timing?
Do I have to somehow export this to Machinima now? (Okay, I am waiting for the teeth-workaround screen-sharing session, so I believe the teeth will work OK soon.) I know how to loop the body movements there.
Or can I somehow bring the Sequencer into the A2F app?
It is not in the extensions…

The Audio2Face app doesn’t have much control over usdSkel animation, other than setting the loop on the timeslider.

The currently suggested workflow is to export the cache from Audio2Face, then assemble the cache with the body animation in Machinima/Create and control the body animation using the Sequencer or AnimGraph…
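If it helps to see the layering idea in script form, here is a minimal USD Python sketch of assembling an exported cache with a body-animation layer on one stage. All file names below are made-up placeholders, not the exact A2F export output:

```python
# Minimal sketch: compose an exported A2F cache with a body-animation layer
# on one stage. File paths are hypothetical placeholders.
from pxr import Usd

stage = Usd.Stage.Open("natasha_scene.usd")

# Sublayer both animation sources over the character; where opinions on the
# same prims overlap, the stronger (earlier) sublayer wins.
root_layer = stage.GetRootLayer()
root_layer.subLayerPaths.append("face_cache_from_a2f.usd")
root_layer.subLayerPaths.append("body_idle_motion.usd")

root_layer.Save()
```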


@pekka.varis
Here is a video on how you can drive the lower teeth and tongue with the current A2F

Hope this helps

Thank you!
I will study the lower teeth workflow.

But about the next step, exporting the cache to Machinima, I cannot find a thing in the documentation. The only thing I found was this:

But that “data export” dialog box is gone now in the latest version. So do I use this function with the final working Natasha Fullbody, where your tutorial ends?

I have to set up the teeth & tongue in A2F with the full body, that I understand. Maybe I then export this without any body movement? And then assemble the cache with the body animation in Machinima, right?

OK, I tried to follow the teeth & tongue tutorial you kindly made. See: no moving teeth. What have I done wrong?


I have not enabled the UsdSkel Blendshapes in Preferences, since disabling that was a fix earlier in this topic.

Oh, the customer does not want to use Machinima at all. He wishes to stay in A2F, and that is fine IF I can somehow make the IDLE movement loop like 20 times and make it a bit slower. I just wrote a topic about this to Reallusion iClone support… Maybe the Actorcore body motion can be edited in iClone :P

While I wait for your answer, I made a new video to show you better where I am failing with driving the teeth & tongue. Please watch this:

The error message I get down at 1:09 after pressing APPLY is this:

So …Teeth_Dummy-primvars:normals:indices UsdRelationship not supported in GetPrimArrayAttr
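In case it helps with debugging, here is a small USD Python sketch to check how that primvar is authored on the teeth prim; the error suggests the reader expects an attribute rather than a relationship. The stage file and prim path are guesses from the error text, not my actual scene:

```python
# Hedged diagnostic: is primvars:normals:indices authored as an attribute or
# as a relationship on the teeth prim? File and prim paths are guesses.
from pxr import Usd, UsdGeom

stage = Usd.Stage.Open("character.usd")  # placeholder file
prim = stage.GetPrimAtPath("/World/character/Teeth_Dummy")  # hypothetical path

normals = UsdGeom.PrimvarsAPI(prim).GetPrimvar("normals")
print("normals primvar indexed:", normals.IsIndexed())
print("authored as attribute:",
      prim.GetAttribute("primvars:normals:indices").IsValid())
print("authored as relationship:",
      prim.GetRelationship("primvars:normals:indices").IsValid())
```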

Hm… interesting.
Can you share a snapshot of your OmniGraph settings?
It’s under the menu: OmniGraph >> Settings

Mine looks like this

Regarding your question about having the idle animation loop at a slower speed:
you can try adding the motion file as a sublayer.
Then, when you select that layer, you can experiment with increasing or reducing the timecodes per second of the sublayer.
Then set your timeline to play in a loop (see the sketch below).
This would only work with a single animation clip, though.
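A minimal USD Python sketch of the same idea, in case you prefer scripting it (file names are placeholders):

```python
# Rough sketch of the sublayer speed trick. Paths are placeholders.
from pxr import Sdf, Usd

stage = Usd.Stage.Open("scene.usd")
stage.GetRootLayer().subLayerPaths.append("idle_motion.usd")

# If the motion layer was authored at 30 timecodes per second, lowering the
# value makes the stage stretch its time samples, i.e. the clip plays slower.
motion_layer = Sdf.Layer.FindOrOpen("idle_motion.usd")
motion_layer.timeCodesPerSecond = 24.0  # ~80% speed relative to 30 tcps

# Looping itself is toggled on the timeline UI (the loop button on the
# timeslider), not in this script.
```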

Here is a video showing the layer workflow:

Thank you for the stage setup video. I will study it.
About the teeth issue, I have those settings as default, like yours:

Hey, the layers / timecodes-per-second solution was great!
Now I need to render out stuff with looping motion & talk. See what happens:

So it works OK from the large PLAY button on the toolbar.
But when I render the movie out, the body movement does not loop and the talk is missing.
What would be the solution for this?

While I wait for your answer to the render-out question above, I studied how to use the iClone 7 “reach target” feature to move the news anchor’s hands to the table:

I exported the character to Omniverse from iClone and did the A2F workflow from the beginning. It works OK!

@weienchen
@esusantolim

I really need help with the missing teeth & tongue workflow and with rendering an mp4 clip. As you can see from the above posts… please help me?

About rendering an mp4 with body movements and a talking character:
I tried it this way: I selected real-time rendering and activated Movie Capture for an mp4 file. I pressed “Capture Sequence” and then pressed the play button in the A2F Audio Player.

And it rendered body movement and talk! Nice!

The only thing missing was the sound. Looks like we have to add and SYNC the sound with some free video tool like OBS or Blender’s video editor (a muxing sketch is below).
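For example, a minimal Python sketch that muxes the original wav onto the captured mp4 with ffmpeg, instead of re-editing in OBS/Blender. This assumes ffmpeg is installed and on PATH; all file names are made up:

```python
# Minimal sketch: add the original wav to the silent Movie Capture mp4.
# Assumes ffmpeg is on PATH; all file names are placeholders.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "a2f_capture.mp4",   # silent video from Movie Capture
    "-i", "voice_line.wav",    # the wav played in the A2F Audio Player
    "-c:v", "copy",            # keep the video stream untouched
    "-c:a", "aac",             # encode the wav for the mp4 container
    "-shortest",               # end at the shorter of the two streams
    "a2f_capture_with_audio.mp4",
], check=True)
```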

But how does this work with Path Traced?
I also need to render Path Traced versions of this…
I can now make a decent enough looking Real-Time version of this character for production, thanks to my partner Chris Scott.

Sorry @pekka.varis, this post is kinda long. Which mp4 are you referring to?

I know, weienchen, it has been a long learning curve :)

I do not refer to any specific mp4 file in the above post starting “About rendering an mp4 with body movements and a talking character…”

I just mean any mp4.
Like, how do you render a movie or image sequence from A2F as Path Traced, with the talking character?

I tried this by starting the real-time render (img seq, not mp4) in A2F, then while the rendering was running I pressed the play button on the wav audio player. It worked, BUT the sync drifts OFF pretty soon.

I had to correct this in Adobe Premiere by speeding the img seq to 96.81% length:
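For anyone hitting the same drift, here is the back-of-the-envelope math behind a factor like that. The numbers below are made-up examples, not my actual frame counts:

```python
# If the capture wrote num_frames images while the wav really lasted
# audio_seconds, the sequence must be interpreted at num_frames/audio_seconds
# fps instead of the fps it was rendered at. Example numbers only.
num_frames = 900        # frames in the captured image sequence (example)
audio_seconds = 29.04   # measured wav duration in seconds (example)
rendered_fps = 30.0     # fps the sequence was assumed to play at

true_fps = num_frames / audio_seconds
print(f"interpret the sequence at {true_fps:.2f} fps")

# Equivalent stretch applied in the editor:
print(f"new length = {rendered_fps / true_fps * 100:.2f} % of the original")
```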

So exporting needs some documentation.

This topic is truly kinda long…
Should I just kill this one now and make new, clean topics for each remaining issue, one by one?