Hi there,
I am using A2F to make an avatar talk within UE. The pipeline is: I send audio to A2F, capture the blendshape values in real time, and send them to a thread in UE. I found that one of the blendshape weights is incorrect, specifically the browInnerUp value, which is too large at all times. I have attached a picture of the blendshapes I captured.
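For context, here is roughly how I read the weights and push them to UE each frame (a minimal sketch only; the solve-node path, the outputs:weight attribute name, and the UE-side socket listener are placeholders for my actual setup):

import json
import socket

import omni.graph.core as og

SOLVE_NODE_PATH = "/World/audio2face/BlendshapeSolve"  # placeholder, adjust to your stage
UE_HOST, UE_PORT = "127.0.0.1", 12030  # hypothetical UE-side listener

def read_weights():
    # Read the currently solved blendshape weights from the A2F graph.
    node = og.ObjectLookup.node(SOLVE_NODE_PATH)
    # "outputs:weight" is an assumption -- inspect the node to confirm the name.
    return og.Controller(node.get_attribute("outputs:weight")).get()

with socket.create_connection((UE_HOST, UE_PORT)) as sock:
    # One JSON line per frame; the UE thread parses and applies the weights.
    weights = read_weights()
    sock.sendall((json.dumps([float(w) for w in weights]) + "\n").encode())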
Nope. Actually, the facial emotion is correct on the model in the viewport. The error I mentioned applies ONLY to the blendshape values read from the API.
This seems to be an issue with the current A2F solve result: at neutral output (silent audio, emotion at neutral 1), Mark produces a slight fearful-looking brow delta versus the neutral model.
The blendshape solve compares the model against the default model, so the delta between the default model and the neutral-emotion pose produces an inner-brow-up blendshape weight.
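To illustrate with made-up numbers (a toy 1-D sketch of the least-squares idea, not the actual solver): the solve looks for weights that best explain the current skin relative to the base mesh, so any neutral mismatch leaks straight into the weights.

import numpy as np

# Toy 1-D illustration with made-up numbers, not the real solver.
default_neutral = np.array([0.00])  # base mesh the solve compares against
neutral_pose    = np.array([0.07])  # A2F skin at silent audio, neutral emotion
brow_inner_up   = np.array([0.10])  # browInnerUp blendshape delta

# Least-squares weight explaining (neutral_pose - default_neutral):
w, *_ = np.linalg.lstsq(brow_inner_up.reshape(-1, 1),
                        neutral_pose - default_neutral, rcond=None)
print(w)  # ~0.7: a large spurious weight from the neutral mismatch alone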
Check out this video where I show how you can use a script to update the default model of the blendshape mesh so that the solve will compare against a matching neutral mesh.
The script used in the video is here:
import omni.graph.core as og

bs_mesh_path = "/World/male_bs_arkit/neutral"  # ADJUST: base (neutral) mesh of the blendshape set
node = og.ObjectLookup.node("/World/audio2face/CoreFullface")  # ADJUST
# Read the current A2F skin output and write it onto the base mesh,
# so the blendshape solve compares against a matching neutral.
points = og.Controller(node.get_attribute("outputs:points_skin")).get()
bs_mesh_prim = og.ObjectLookup.prim(bs_mesh_path)
bs_mesh_prim.GetAttribute("points").Set(points)
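A note on how this works: with silent audio and the emotion at neutral, outputs:points_skin holds A2F's neutral pose, and the script copies those points onto the base mesh of the blendshape set, so subsequent solves measure deltas against a matching neutral. You only need to run it once (e.g. from the Script Editor) after adjusting the two paths.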
Hi @esusantolim ,
Thanks for the solution, it works!
Following your solution, the browInnerUp value now ranges from 0.1 to 0.13, compared to the old value of around 0.7, which is acceptable.
I am also wondering whether this will remain the solution in future releases. As you can see, it is not very straightforward for most users.
Terry
This is an issue with any head model, not just Mark’s. There are plans to reduce this in the future, but for now, for a perfect match, the suggested solution is the way to go.