Hello everyone,
I’m currently exploring the features of Audio2Face in Omniverse, and I have a couple of questions. Firstly, I’d like to know if it’s possible to use a model other than Mark or Claire when using the audio player for streaming. If so, could you please guide me on the necessary configuration steps?
Additionally, I’m curious about the possibility of applying textures to the Mark model in Audio2Face. If anyone has successfully done this or has any tips to share, I would greatly appreciate any assistance.
Thank you in advance for your responses and help!
Yes, you can use a custom model for streaming, to Unreal Engine for example. To do this, you need to generate ARKit blendshapes for your custom mesh and drive them using the solved version of your mesh. That said, in the tests we’ve done, you won’t gain noticeable improvements compared to simply streaming Mark to your custom character.
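Whichever character ends up driving the face, the audio side of the pipeline is the same: you push audio into the streaming audio player over gRPC. Here’s a minimal sketch, assuming the `push_audio_track` helper from the streaming client samples that ship with A2F; the helper name, port, and player prim path are assumptions, so check your local install:

```python
# Minimal sketch: push a mono WAV file to the Audio2Face streaming audio player
# over gRPC. The import, server address, and player prim path below are
# assumptions based on the A2F streaming client samples -- verify them locally.
import numpy as np
import soundfile as sf

from audio2face_streaming_utils import push_audio_track  # from the A2F samples (assumed)

A2F_URL = "localhost:50051"                        # default gRPC port (assumed)
PLAYER_PRIM = "/World/audio2face/PlayerStreaming"  # streaming audio player prim (assumed)

def stream_wav(path: str) -> None:
    data, samplerate = sf.read(path, dtype="float32")
    if data.ndim > 1:
        # Down-mix stereo to mono before sending
        data = np.mean(data, axis=1)
    # Send the whole clip in one call; the samples also include a chunked
    # streaming variant for long or live audio.
    push_audio_track(A2F_URL, data, samplerate, PLAYER_PRIM)

if __name__ == "__main__":
    stream_wav("my_line.wav")
```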
Applying textures works the same way across all Omniverse apps. Please check YouTube for some tutorials. Here’s one I found: Five Things to Know About Materials in NVIDIA Omniverse (youtube.com)
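If you’d rather script it than use the Material/Looks UI, here’s a rough sketch of binding a textured UsdPreviewSurface material to a mesh from the Script Editor. The mesh path and texture path are placeholders, not the actual Mark prim paths, so adjust them to your stage:

```python
# Rough sketch: bind a textured UsdPreviewSurface material to a mesh prim
# from the Omniverse Script Editor. MESH_PATH and TEXTURE_PATH are assumed
# placeholders -- point them at the Mark head mesh and your own texture file.
import omni.usd
from pxr import Sdf, UsdShade

stage = omni.usd.get_context().get_stage()

MESH_PATH = "/World/character/head_mesh"        # assumed path to the Mark head mesh
TEXTURE_PATH = "C:/textures/mark_diffuse.png"   # assumed texture location

# Material and surface shader
material = UsdShade.Material.Define(stage, "/World/Looks/MarkSkin")
shader = UsdShade.Shader.Define(stage, "/World/Looks/MarkSkin/PreviewSurface")
shader.CreateIdAttr("UsdPreviewSurface")

# Texture node reading the diffuse map, with a primvar reader for UVs
st_reader = UsdShade.Shader.Define(stage, "/World/Looks/MarkSkin/stReader")
st_reader.CreateIdAttr("UsdPrimvarReader_float2")
st_reader.CreateInput("varname", Sdf.ValueTypeNames.Token).Set("st")

tex = UsdShade.Shader.Define(stage, "/World/Looks/MarkSkin/DiffuseTex")
tex.CreateIdAttr("UsdUVTexture")
tex.CreateInput("file", Sdf.ValueTypeNames.Asset).Set(TEXTURE_PATH)
tex.CreateInput("st", Sdf.ValueTypeNames.Float2).ConnectToSource(
    st_reader.CreateOutput("result", Sdf.ValueTypeNames.Float2)
)

# Wire the texture into the shader and the shader into the material
shader.CreateInput("diffuseColor", Sdf.ValueTypeNames.Color3f).ConnectToSource(
    tex.CreateOutput("rgb", Sdf.ValueTypeNames.Float3)
)
material.CreateSurfaceOutput().ConnectToSource(shader.ConnectableAPI(), "surface")

# Bind the material to the mesh
mesh_prim = stage.GetPrimAtPath(MESH_PATH)
UsdShade.MaterialBindingAPI(mesh_prim).Bind(material)
```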
Pardon me for hijacking the thread a bit, but I just want to confirm I’m understanding correctly: are you saying there is no longer much benefit to the driver and driven meshes being the same, in contrast to the post you made last year?
@Ataraksea
I think “best match result” and “noticeable improvements” explain themselves: you can still aim for the best possible match, but you can also just benefit from the performance of the default setup.
That’s correct [most of the time]. A2F uses a FloatArrayTuner node to dial the output blendshape values from A2F before streaming, so we can decide how much of, for example, Mark’s jaw-open should be applied to the MetaHuman. By default this should work for most standard human faces; testing it on a MetaHuman character gave almost the same result. But if the driver and driven meshes are very different, e.g. a cartoon character, then you will probably benefit from solving and streaming that same character.
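Conceptually, that tuning step just scales each ARKit blendshape weight before it is streamed to the driven character. A rough Python illustration of the idea (the names and multiplier values are examples only, not the FloatArrayTuner node’s actual interface):

```python
# Illustration of per-blendshape tuning: each ARKit blendshape weight coming
# out of A2F is scaled before being streamed to the driven character.
# The multipliers below are made-up examples, not real FloatArrayTuner values.
from typing import Dict

# e.g. reduce how strongly Mark's jaw-open drives the MetaHuman's jaw
TUNER_MULTIPLIERS: Dict[str, float] = {
    "jawOpen": 0.7,
    "mouthSmileLeft": 1.1,
    "mouthSmileRight": 1.1,
}

def tune_weights(weights: Dict[str, float]) -> Dict[str, float]:
    """Scale each blendshape weight, defaulting to 1.0 (pass-through)."""
    return {name: value * TUNER_MULTIPLIERS.get(name, 1.0)
            for name, value in weights.items()}

# Example: one frame of solved weights from A2F
frame = {"jawOpen": 0.55, "mouthSmileLeft": 0.2, "browInnerUp": 0.1}
print(tune_weights(frame))  # jawOpen is damped to ~0.385 before streaming
```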