Reallusion characters for avatar streaming

Reallusion’s Character Creator 4 characters come with an expanded list of blendshapes that doesn’t trivially map to the 52 ARKit blendshapes. What would be the recommended flow for using these characters for avatar streaming? I thought about recreating the ARKit blendshapes in Audio2Face, but figured there must be an easier way that I am missing. Should I add a node to the solved Mark blendshape file and somehow map the 52 values to the 70-something blendshapes in my character? Would appreciate some pointers.

Or will there be a workflow video for custom characters from Reallusion, similar to the Camilla videos?

Thank you for all the hard work! The new updates are so exciting!

Hello and welcome to the forums @Ofoz

This is possible, and we have a Siggraph talk happening in about a month that shows the workflow.

In the meantime, we’re working with the Reallusion team to make this more streamlined.

Thanks so much for the response!

For the Siggraph talk, should I be registering to get access to it?

And while waiting, any hints on the approach I can take? (Is it a mapping between the blendshapes, or a way to export from CC4 with only the Apple ARKit blendshapes?)

You don’t need to register for Siggraph for this.

The solution is to create a script that matches the Reallusion blendShape target names to the Audio2Face blendShape target names. It takes a bit of digging: finding the missing (or added) blendShapes, merging the split ones, and renaming them. This is possible if one has enough knowledge of blendShapes in USD and Omniverse Python scripting.
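As a rough illustration only (not the official script), a renaming pass over a UsdSkelAnimation prim could look like the sketch below. The file name, prim path, and mapping entries are all assumptions and would need to be adapted to your own CC4 export; the `skel:blendShapes` name lists on the bound meshes would need the same renaming applied so everything stays consistent.

```python
# Minimal sketch, assuming a UsdSkelAnimation prim drives the CC4 blendshapes.
# File name, prim path, and mapping entries below are placeholders.
from pxr import Usd, UsdSkel

# Hypothetical CC4 -> ARKit/Audio2Face target-name mapping (extend per character)
CC4_TO_ARKIT = {
    "Jaw_Open": "jawOpen",
    "Mouth_Smile_L": "mouthSmileLeft",
    "Mouth_Smile_R": "mouthSmileRight",
}

stage = Usd.Stage.Open("cc4_character.usda")                  # assumed file
anim = UsdSkel.Animation.Get(stage, "/Character/Animation")   # assumed prim path

names = list(anim.GetBlendShapesAttr().Get())
anim.GetBlendShapesAttr().Set([CC4_TO_ARKIT.get(n, n) for n in names])

stage.GetRootLayer().Save()
```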

Thanks again. I made some progress on this by switching back to the Standard Profile in CC4 (which has a much better 1:1 mapping with the Apple blendshapes) and then editing the USD as USDA inside Composer to reorder/reduce the blendshapes, so that the incoming array maps to an array of the same size. This moved things in the right direction, but if I use Mark_solved in Audio2Face and a male Reallusion character on the avatar streaming side, the lip sync is still fairly off.
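For clarity, the reorder/reduce step I mean amounts to something like the following sketch; the name lists are illustrative placeholders, not the real 52 ARKit names or my character’s full target order.

```python
# Rough sketch: reorder one frame of incoming ARKit weights into the
# character's blendshape target order. Both name lists are truncated
# placeholders for the real 52-entry / 70-something-entry lists.
ARKIT_ORDER = ["browInnerUp", "jawOpen", "mouthSmileLeft", "mouthSmileRight"]
CHARACTER_ORDER = ["jawOpen", "mouthSmileLeft", "mouthSmileRight", "browInnerUp"]

def remap_weights(incoming):
    """Reorder a frame of incoming weights to match the character's targets."""
    by_name = dict(zip(ARKIT_ORDER, incoming))
    return [by_name.get(name, 0.0) for name in CHARACTER_ORDER]

# Example frame of placeholder weights
print(remap_weights([0.1, 0.8, 0.3, 0.25]))
```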

I wanted to double-check my understanding of the ARKit blendshape workflow: on the Audio2Face side, we do not need to solve for the blendshape weights with the exact same character we will use on the avatar streaming side, right? When I stream from Mark_solved on the Audio2Face side to Claire on the avatar streaming side, it still maps pretty well, so that was my understanding. Should this be true for Reallusion characters as well? (I am trying to get base_body working first; I understand I will still have to deal with the tongue and teeth meshes later.)

In order to obtain the best matching blendShape weights, the solver needs the driver and driven meshes to be the same.

Hi!

Did the Siggraph talk mentioned above happen? I would appreciate any available links. Also, are there any updates on Reallusion blendshapes working more smoothly with avatar streaming (using the blendshapes available in the Standard Profile)?

Hi @Ofoz
I’ll update here if/when the Siggraph talks are available to the public.

But in the meantime, you can solve the blendShapes with an arbitrary number of targets in Audio2Face, using the A2F Data Conversion tab -> BlendShape Conversion. Just load the new CC4 character with 70 blendShape targets as the BlendShape Mesh.

Hello! Are there any updates on the Siggraph talks being available to the public?

Sorry for the late reply. Below are the videos. The second video is for ACE users, but we’re working on another version that can be used by non-ACE users as well. Keep an eye on this thread.

Exporting Character in Reallusion CC + Audio2Face Workflow: Part 1 (youtube.com)
Exporting Character in Reallusion CC + Audio2Face Workflow: Part 2 (youtube.com)


Hello! Thank you for the links to the videos; they are very useful.

How can we get the Python script from the Part 1 video? I tried to recreate it myself, but the video image is blurred on the second part of the script.

If you are referring to the snippet from the Google doc, you can take a look at


Thank you!

Thank you! This is super useful. Regarding the second video, do you mean it is currently not possible to hook up the output of the first video to the streaming scenario, or that you will be making a video on it later?

Since ACE is pretty exclusive right now, I am trying to understand when I can use Reallusion characters with blendshape weight streaming outside of ACE. Is it doable with some self-serve hurdles, or is it not supported right now?

We’re preparing a 2nd video for non-ACE users. It should be available in a few days. If not, please don’t hesitate to remind us.

@Ehsan.HM Just a reminder.

Thanks for the reminder. Here are the videos for non-ACE-users:
Fullbody ARKit Workflow with Audio2Face and Character Creator Part 1 - YouTube
Fullbody ARKit Workflow with Audio2Face and Character Creator Part 2 - YouTube


Thanks, @Ehsan.HM. Really appreciate it.
