I was wondering whether you can import a MetaHuman face mesh into Audio2Face to achieve better results. On the Audio2Face product page I see other character face meshes imported into Audio2Face, from what looks like iClone or Character Creator, so I was wondering if the same thing would work for MetaHumans. I'm asking because the results I'm getting from exporting the blendshape solve and then importing into Unreal Engine 5 are not really usable.
Hi @idancerecords, you can probably try this route: export the MetaHuman into Maya, then convert it to .usd to import into Audio2Face.
Could you give us more detail on which part of the blendshape solve + import is not usable for you? We can see if there is room for improvement.
There is definitely room for improvement. The blendshape solve basically destroys all the great AI lip sync that Audio2Face generates. After I exported from Audio2Face and imported into Unreal, the entire animation was way off… the mouth never even closes. It's just a mess of unusable animation. I basically had to redo the entire animation by hand… creating blendshape poses for all the phoneme shapes and then re-animating the entire thing. So yeah, there is definitely room for improvement. You need to make it so that the animation Audio2Face generates is exactly the same when it's applied to a custom character… and in this case, to the Epic Games MetaHuman characters. If I had to tweak the animation a tiny bit here and there I would be okay with that, but the results I got were not usable at all and I ended up having to animate the entire thing by hand… which defeats the whole point of using this software.
I actually think the whole approach to this software is off. I don't want to start off in Audio2Face using some generic male avatar head… what if my character is female? It would be way better to be able to import your custom character… whatever that is… Character Creator, iClone, MetaHuman, DazStudio, Poser, custom mesh, etc.… and then begin the animation process with the character that will ultimately be using the animation, so you don't waste time tweaking the animation on some random generic proxy character. You would then know exactly what you would be getting. The final resulting facial animation and lip sync should be exactly what you see inside Audio2Face.
Hi @idancerecords ,
We are aware of some of the bad mappings and will be addressing them in future updates. In the meantime, the blendshape solve does have some attributes you can tune to improve this.
To answer your other question about taking the MetaHuman face into Audio2Face: yes, you can definitely do that, and you would not have this issue if you did a character transfer to that face instead. You also get better quality compared to the blendshape solve. Depending on your needs, this may work for you as well.
Can you point me to some docs or an instructions/tutorial on how to take the MetaHuman face into Audio2Face and do a character transfer to that face?
+1 on this. I used the Audio2Face 2021.3.3 export to MetaHumans in Unreal 4 with great results and eagerly awaited the 2022.1 release. But when I tried to export data from the new "Full Face" model to MetaHumans in Unreal 5, the lip sync completely failed.
After so many pushed back release dates due to quality assurance and the GREAT new emotion features, this is kind of frustrating. Can I kindly ask the awesome NVIDIA engineers on this project to look into this and post a fix?
Rather than waiting for another full update, maybe posting just an updated blueprint for the MetaHumans/UE5 export as a separate file would be faster, and therefore much appreciated.
Thank you very much in advance for considering this.
@siyuen: can you please elaborate on the blendshape attribute tuning?