@siyuen Hi there! Thanks for the reply.
What I’d like to be able to do is export motion data that can then be used on a game rig. The best format for that, at least for most modern facial rigs for games, is float channels that represent different FACS shapes. Mesh caches are not particularly useful as a data format for games (at least at runtime), for a few reasons…
Vertex motion saves motion and shape together, which makes it very mesh-specific: you can’t port it between characters. One of the nice things about FACS is that it uses generic descriptors like “the jaw is 48% open” or “the middle of the left eyebrow is 20% raised”. Those kinds of descriptors separate the description of the motion from the shape of the face: they leave the specifics of what to do with the descriptors to each individual facial rig. That’s a lot more useful for game teams.
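To make that concrete, here’s a rough sketch of what per-frame float-channel data looks like. The channel names and values are just illustrative (not from A2F or any specific rig):

```python
# One frame of hypothetical FACS-style float-channel data.
# Only motion is described; what each channel does to the mesh
# is left entirely to the target rig.
frame_42 = {
    "jawOpen": 0.48,        # "the jaw is 48% open"
    "browMidUp_L": 0.20,    # "the middle of the left eyebrow is 20% raised"
    "mouthSmile_R": 0.0,    # channel present but unused this frame
}
```

Any rig that exposes the same (or mappable) channels can play this data back, which is exactly why it ports between characters where a mesh cache can’t.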
Vertex motion is difficult to edit. Say I really like the lip sync from a capture but want to make the character smile a bit more, or raise their eyebrows further; that’s very hard to do with a mesh cache. With blendshapes, bones, FACS rigs, or anything else driven by float channels, you have controls in place that an animator can use, so it’s much easier to make those adjustments. It also enables runtime blending of animations, e.g. running the lips on a separate animation layer from the eyes, so you can dynamically control eye look-at direction, or the emotion in the brow, independently of what the person is saying.
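The layering idea above can be sketched in a few lines, assuming each layer owns a disjoint set of hypothetical channel names:

```python
# Sketch of runtime layer blending: lip-sync channels come from the
# captured performance, eye/brow channels from gameplay logic.
# All channel names here are hypothetical.
lip_layer  = {"jawOpen": 0.48, "mouthFunnel": 0.10}   # from the capture
face_layer = {"browMidUp_L": 0.20, "eyeLookOut_R": 0.35}  # driven at runtime

def blend_layers(*layers):
    """Merge channel dicts; later layers override earlier ones
    if they share a channel, otherwise the sets just combine."""
    result = {}
    for layer in layers:
        result.update(layer)
    return result

pose = blend_layers(lip_layer, face_layer)
```

Because the data is just named floats, swapping the eye layer out for a look-at controller doesn’t touch the lip-sync data at all.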
Mesh caches are comparatively heavy, memory-wise, so they aren’t really suitable as a runtime animation file format.
I appreciate your point about people having different face rigs, but once the data is in a float-channel format, it’s fairly straightforward for a technical animator at a studio to write a script to convert the data, provided they have access to both the source and target rigs. You set each channel on the source rig to 100%, one at a time, animate the target rig to match that shape, and then save out a mapping of the values for that shape (e.g. 100% on this channel = this combination of channel values on the other rig).
Once you have that mapping, you can process all future captures very quickly. That approach just isn’t possible with a mesh cache.
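The retargeting workflow above could be sketched something like this, assuming both rigs expose plain float channels; every name here is hypothetical:

```python
# Sketch of channel retargeting via a hand-built calibration mapping.
# Step 1 (done once, by hand): for each source channel posed at 100%,
# record the target-rig channel values that reproduce the same shape.
mapping = {
    "jawOpen":  {"C_jaw_open": 1.0, "C_lips_part": 0.3},
    "browUp_L": {"L_brow_raise": 0.8},
}

def retarget_frame(source_values, mapping):
    """Scale each recorded target pose by the source channel's current
    value (the mapping was captured at 100%) and accumulate the results."""
    target = {}
    for src_channel, weight in source_values.items():
        for tgt_channel, full_value in mapping.get(src_channel, {}).items():
            target[tgt_channel] = target.get(tgt_channel, 0.0) + weight * full_value
    return target

# Example: a frame where the source rig's jaw is 48% open
converted = retarget_frame({"jawOpen": 0.48}, mapping)
```

Building `mapping` is the only manual step; after that, every capture converts in a single batch pass.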
If A2F supports blendshapes, I think allowing users to export the blendshape channel data would add a huge amount of value. It would also be very helpful to include the basic blendshape rig, with no motion on it, as something like an .fbx file, so that users have a reference for what each individual channel does and can match the shapes to create a mapping for the data. It wouldn’t need any complex controls or anything like that; just the float channels for each of the shapes.
Hope that all makes sense? Thanks again for taking the time to follow up.