Reallusion’s Character Creator 4 comes with an expanded set of blendshapes that doesn’t trivially map to the 52 ARKit blendshapes. What would be the recommended flow for using these characters for avatar streaming? I thought about recreating the ARKit blendshapes in Audio2Face, but decided there should be an easier way and that I am missing something. Should I be adding a node to the solved Mark blendshape file and somehow map the 52 weights to the 70-something blendshapes in my character? Would appreciate some pointers.
Or will there be a workflow video similar to the Camilla videos, but for custom characters from Reallusion?
Thank you for all the hard work! The new updates are so exciting!
Hello and welcome to the forums @Ofoz
This is possible, and we have a Siggraph talk happening in about a month that shows the workflow.
In the meantime, we’re working with the Reallusion team to make this more streamlined.
Thanks so much for the response!
For the Siggraph talk, should I be registering to get access to it?
And in the meantime, any hints on the approach I can take? (Is it a mapping between the blendshapes, or a way to export from CC4 with only the Apple ARKit blendshapes?)
You don’t need to register for Siggraph for this.
The solution is to create a script that matches the Reallusion blendShape target names to the Audio2Face blendShape target names. It takes a bit of digging to find the missing (or added) blendShapes, merge the split ones, and rename them, which is possible if one has enough knowledge of blendShapes in USD and Omniverse Python scripting.
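To illustrate the idea, here is a minimal sketch of the rename/merge logic, kept independent of the USD API. The target names below are made-up placeholders, not the real CC4 or Audio2Face naming; you would fill the tables in after inspecting your character's actual blendShape target list on the USD stage.

```python
# Sketch: convert ARKit-style blendShape weights to CC4-style target weights.
# All target names here are hypothetical examples for illustration only.

# Direct renames: one ARKit target maps to one CC4 target.
RENAME = {
    "jawOpen": "Jaw_Open",
    "mouthSmileLeft": "Mouth_Smile_L",
    "mouthSmileRight": "Mouth_Smile_R",
}

# Merges: one ARKit weight drives several CC4 targets (split shapes).
MERGE = {
    "browInnerUp": ["Brow_Raise_Inner_L", "Brow_Raise_Inner_R"],
}

def remap_weights(arkit_weights):
    """Convert {arkit_name: weight} into {cc4_name: weight}.

    Targets with no CC4 counterpart are dropped; merged targets each
    receive the full source weight.
    """
    out = {}
    for name, weight in arkit_weights.items():
        if name in RENAME:
            out[RENAME[name]] = weight
        elif name in MERGE:
            for target in MERGE[name]:
                out[target] = weight
    return out
```

In an Omniverse script you would apply the same mapping to the `blendShapes` / `blendShapeWeights` attributes of the skinned mesh prim, but the table-driven rename/merge core stays the same.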
Thanks again. I made some progress on this by switching back to the Standard profile in CC4 (which has a much better 1:1 mapping with the Apple blendshapes) and then editing the USD as usda inside Composer to reorder/reduce the blendshapes, so that the incoming array maps to an array of the same size. This moved things in the right direction, but if I use Mark_solved in Audio2Face with a male Reallusion character in avatar streaming, the lip sync is still fairly off.
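For reference, the reorder/reduce step I described boils down to building an index map from the incoming weight order to the character's target order. This is a rough sketch of that logic with made-up name lists, not the actual arrays from my usda:

```python
# Sketch: reorder an incoming ARKit weight array to match a character's
# blendShape target order. Name lists are illustrative placeholders.

def build_index_map(incoming_names, character_names):
    """For each character target, find its index in the incoming array
    (None if the incoming stream has no such target)."""
    lookup = {name: i for i, name in enumerate(incoming_names)}
    return [lookup.get(name) for name in character_names]

def reorder(weights, index_map):
    """Produce a weight array in the character's target order.
    Targets absent from the incoming stream get weight 0.0."""
    return [weights[i] if i is not None else 0.0 for i in index_map]
```

The index map only needs to be built once per character; each incoming frame of weights is then just a cheap list lookup.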
I wanted to double-check my understanding of the workflow with ARKit blendshapes: on the Audio2Face side, we do not need to solve and obtain the blendshape weights with the exact same character we will use on the avatar streaming side, right? When I stream from Mark_solved on the Audio2Face side to Claire on the avatar streaming side, it still maps pretty well, so that was my understanding. Should this be true for Reallusion characters as well? (I am trying to get base_body working first; I understand I will still have to deal with the tongue and teeth meshes later.)
In order to obtain the best-matching blendShape weights, the solver needs the driver and driven meshes to be the same.
Did the Siggraph talk mentioned above happen? I would appreciate any links available. Also, any updates on Reallusion blendshapes working more smoothly with avatar streaming (using the blendshapes available in the Standard profile)?
I’ll update here if/when Siggraph talks are available to public.
But in the meantime, you can solve the blendShapes with an arbitrary number of targets in Audio2Face, using the
A2F Data Conversion tab -> BlendShape Conversion. Just load the new CC4 character with 70 blendShape targets as