Normals in blendshapes?

Any plans to add normals to the generated blendshapes? As I understand it, USD supports normals on blendshapes, but the ones generated by A2F do not seem to include any normal data, so the deformed mesh does not look as good.

For example, the corners of the mouth can flex a lot (changing the angle), so without normals you start to see seams where the faces join.

No blendshape:
[image]

Open mouth blendshape:
[image]

My model is not super high quality, but computing normal adjustments would be nice to improve the render quality.

It looks like there are a couple of big triangles around the lips. Are you able to share your mesh?

You might be able to change the behavior of the normals in the USD file. You could save your stage as .usda and look at the file in a text editor. Make sure your mesh normals use faceVarying interpolation:

normal3f[] normals = [(0, 0, 1), (0, 0, 1), ...] (
    interpolation = "faceVarying"
)
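
If it helps, the same thing can be checked and changed with the USD Python API. A minimal sketch, assuming the pxr bindings are available; the file name and prim path below are just placeholders:

# Check and, if needed, change the normals interpolation on a mesh.
# "character.usda" and "/World/character/face_mesh" are hypothetical.
from pxr import Usd, UsdGeom

stage = Usd.Stage.Open("character.usda")
mesh = UsdGeom.Mesh.Get(stage, "/World/character/face_mesh")

print(mesh.GetNormalsInterpolation())  # e.g. "vertex" or "faceVarying"

if mesh.GetNormalsInterpolation() != UsdGeom.Tokens.faceVarying:
    mesh.SetNormalsInterpolation(UsdGeom.Tokens.faceVarying)
    stage.GetRootLayer().Save()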

Yes - I am going to try adjusting the markers to see if I can get a better result, but I think the edges are there because the points are getting moved so much - what was the flat top of the lip is getting moved out and down, so its angle is changing. Because the blendshape moves the points but not the normals, the old normal is still in effect, so you get this “edge” visible (the normals are not pointing at the new angle, so they cannot smooth things over).

And yes, using faceVarying.

To be clear, are you suggesting to try adding (0,0,1) for each point in the blendshape for normals? Or was that just illustrative?

Hi. Here is another image from a different character generated by the same software. One corner of the mouth looks fine; on the other you can see the edges, because the movements are more severe. That is partly because it looks like it picked a point on the top edge of the lips to include with the bottom lip. But you can see the sharp edges again.

This model I can share - the GLB file is at https://ordinary-animator.web.app/characters/AvatarSample_A.vrm - a USDA version is in ord-animator.zip - Google Drive (this ZIP was created from the website https://ordinary-animator.web.app/ - I am experimenting with putting it all together).

The same software created both characters, but I have not checked whether the mesh is the same for both (I believe it is).

I noticed your mesh border is on the lips. Spike issues like that are often smoothed out with our Mush deformer, but by default the deformer pins the border edges, meaning those spikes won’t get smoothed.
Try finding the Mush deformer after setting up the character transfer, uncheck “Pin Perimeter”, and set the smooth level to a higher value.

Thanks, I will give that a go. Will that affect normals? (The thrust of this post was that the generated blendshapes do not adjust the normals for faces, just the vertex positions, meaning you can get ugly shadows where things are moved a fair bit - such as around the lips.)

Audio2Face does not affect the normals at all. It only deforms the points, very similar to blendShapes in Maya or morphs in 3ds Max and Blender.

Thanks Ehsan. I was doing a bit more reading (there is a recent discussion thread in AOUSD about normals in blend shapes). Just sharing a few points in case someone comes across this thread.

So it seems like a few tools do it internally but don’t export it, but there are rendering engines that will use the information if available. Support is certainly patchy across platforms, but apparently Unity (and possibly Unreal) do support normals per blend shape.

You’re right, Alan. In fact, I noticed that USD, too, supports normals in blendShapes. This is something I learned recently. Thanks for bringing it up.
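
For reference, in the UsdSkel schema a blendShape can carry a normalOffsets attribute next to its point offsets. A minimal sketch of authoring one with the Python API; the prim path and values are made up purely for illustration:

# Author a blendShape that includes normalOffsets (hypothetical path and values).
from pxr import Usd, UsdSkel, Vt, Gf

stage = Usd.Stage.CreateNew("blendshape_example.usda")
shape = UsdSkel.BlendShape.Define(stage, "/Shapes/openMouth")

# Point offsets for the affected vertices...
shape.CreateOffsetsAttr(Vt.Vec3fArray([Gf.Vec3f(0.0, -0.5, 0.2)]))
# ...and the matching normal offsets USD allows per blendShape.
shape.CreateNormalOffsetsAttr(Vt.Vec3fArray([Gf.Vec3f(0.0, 0.1, 0.0)]))
# Sparse indices of the points the offsets apply to.
shape.CreatePointIndicesAttr(Vt.IntArray([42]))

stage.GetRootLayer().Save()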

If I’m not mistaken, in Maya you can lock the normals, which will look broken if blendShapes are deforming the mesh. But if the normals are unlocked (in Maya, unlock then set to face), then the normals deform as expected, meaning they automatically get updated when the mesh deforms to keep the same look as the rest mesh.
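
For anyone following along, the Maya steps described above look roughly like this in maya.cmds; “faceMesh” is just a placeholder name:

# Rough sketch of "unlock normals, then set to face" in Maya (mesh name is an example).
import maya.cmds as cmds

cmds.select("faceMesh")
# Unlock any locked per-vertex normals so Maya recomputes them on deform.
cmds.polyNormalPerVertex(unFreezeNormal=True)
# Then set the normals back to face normals, as described above.
cmds.polySetToFaceNormal()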

I’ll do some more tests to see how A2F behaves with different blendShapes and will update this thread.

Hi Alan, I confirm A2F does not export normalOffsets for blendShapes. But to be honest, I don’t see a benefit in implementing it, especially if you want to use the final product in another 3D software.

Please let me know if you think there’s a solid reason for this and I’ll convey your message to the team.

Thanks. Is it more important than other features? Dunno! Is there use for it? Yes.

It would improve the render quality of close-ups in OV of face animations using blend shapes generated by Audio2Face. (They are also used in Unity and, I believe, Unreal Engine, but I have not confirmed whether normals get auto-computed when they are omitted from the blend shape.)

In OV, above I showed examples where the corner of the mouth is smooth when not using blend shapes. When the mouth is opened with blend shapes, the edge between faces becomes noticeable. I believe this is because it is using the original normals even though the faces have changed angles.

Workaround: I wonder if it is possible to write an extension that computes the difference in angle of the faces and uses that to add normals to the blend shape. That would potentially be useful for any blend shape. A rough sketch of the idea is below.
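
This is only the core math, not an existing extension: recompute vertex normals on the rest mesh and on the offset mesh, and author the difference as normalOffsets. It assumes triangle faces and vertex-interpolated normals (a faceVarying mesh would need the result mapped back per face-vertex):

# Sketch: derive blendshape normalOffsets from point offsets (numpy, triangles assumed).
import numpy as np

def vertex_normals(points, tri_indices):
    # Area-weighted per-vertex normals for a triangle mesh.
    normals = np.zeros_like(points)
    p0, p1, p2 = (points[tri_indices[:, i]] for i in range(3))
    face_n = np.cross(p1 - p0, p2 - p0)  # cross product length ~ 2 * triangle area
    for i in range(3):
        np.add.at(normals, tri_indices[:, i], face_n)
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.clip(lengths, 1e-8, None)

def blendshape_normal_offsets(rest_points, offsets, point_indices, tri_indices):
    # normalOffsets = normals(rest + offsets) - normals(rest), for the moved points.
    deformed = rest_points.copy()
    deformed[point_indices] += offsets
    rest_n = vertex_normals(rest_points, tri_indices)
    deformed_n = vertex_normals(deformed, tri_indices)
    return (deformed_n - rest_n)[point_indices]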

Alternative: verify that OV adjusts normals during rendering when a blend shape without normal offsets is applied. (I believe this is what Unity and Unreal Engine do.) That is, don’t add normals to the blend shape - make it work well when they are not present in the blend shape.

In case of interest, here is the same mesh but using blend shapes from a different source. Note that the flat face problem does not seem to be as bad. So I don’t think “the mesh is too low resolution” is necessarily correct.