A2F for Rigid Actors

Could you consider adding support for non-deforming workflows to your roadmap?

I’m targeting crowd sims and large crowd rendering, so my actors do not deform: deformations cannot be instanced on the GPU, and they would severely limit the number of actors that can be rendered within an acceptable timeframe and the available VRAM.

So I’ve built a Rigid Actor system in Houdini, where the actor still articulates based on the rig transforms but does not deform.

Imagine various hard-surface pieces assembled to resemble a biped, all attached to a rigid rig (no mesh skinning) by parenting them to the rig transforms.
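In USD terms, the idea looks roughly like this (a toy sketch; every prim name here is made up):

```python
# Toy illustration of a rigid actor: mesh pieces parented under joint
# Xforms, so animating the Xforms articulates the actor with no skinning.
# All prim names are placeholders.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateInMemory()
UsdGeom.Xform.Define(stage, "/Actor")
UsdGeom.Xform.Define(stage, "/Actor/Head")       # "joint" transform
UsdGeom.Xform.Define(stage, "/Actor/Head/Jaw")   # child "joint"

# Rigid pieces are plain meshes that inherit their parent joint's motion.
UsdGeom.Mesh.Define(stage, "/Actor/Head/Skull_geo")
UsdGeom.Mesh.Define(stage, "/Actor/Head/Jaw/Jaw_geo")
```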

So essentially, I don’t need A2F to deform a skinned mesh; I just need the location(s) of the Mesh Fitting point(s) across the transcoded audio phrase (or something similar to that).

I realize this may be an edge case for your project, but it would allow me to set-dress an entire stadium of cheering rigid actors very easily.

Hmm, I guess I could use the existing workflow that deforms a skinned mesh, and then manually select some points on the deformed mesh animation to extract transforms from.

Yeah, that would work just as well, without the A2F team having to create a special workflow for rigid actors.
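Something like this, perhaps (a rough sketch; the node path, point number, and frame range are placeholders for my setup):

```python
# Sample one point's position per frame from the deformed A2F mesh in
# Houdini; the resulting samples can drive a rigid piece's transform.
import hou

SOP_PATH  = "/obj/a2f_head/OUT_deformed"  # placeholder: baked A2F mesh
POINT_NUM = 1234                          # placeholder: a point near the chin

node = hou.node(SOP_PATH)
samples = []
for frame in range(1, 241):               # placeholder: length of the phrase
    hou.setFrame(frame)
    pos = node.geometry().point(POINT_NUM).position()  # object space
    samples.append((frame, pos))

# `samples` can now be written out as keyframes on a rigid piece's
# translate parameters, or fed to CHOPs.
```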

edit: or if I could read the Mesh Fitting points, that’d be fine too I suppose

Hi Daryl,

When you say rigid actor, do you mean they are just cut up into pieces and move rigidly? What kind of characters are they? Robot or organic characters?

Technically, you can constrain those rigid joints/objects on top of the deformed surface. Your point extraction/reading method also works.
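For example, something along these lines in Houdini (just a sketch, not an official workflow; the node paths and the sample location are assumptions):

```python
# Each frame, sample a fixed parametric location on the deforming face
# mesh and key the rigid piece's translate to follow it.
import hou

surface = hou.node("/obj/a2f_head/OUT_deformed")  # assumed deformed mesh
piece   = hou.node("/obj/jaw_piece")              # assumed rigid piece

PRIM, U, V = 42, 0.5, 0.5    # assumed primitive + UV on the face surface

for frame in range(1, 241):                       # assumed clip range
    hou.setFrame(frame)
    pos = surface.geometry().prim(PRIM).positionAtInterior(U, V)
    for parm, value in zip(piece.parmTuple("t"), pos):
        parm.setKeyframe(hou.Keyframe(value, hou.frameToTime(frame)))
```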

do you mean they are just cut up into pieces and move rigidly (that articulate)?

Yes exactly! Cut up pieces parented to the rigid rig.

What kind of characters are they?

Well, any non-organic character can be rigged using this system: not just robots, but anything that can articulate but not deform.

I’m very proud of it; I’ve invested over a year of R&D getting to this point. So I was very happy to see A2F, because I can leverage it to add some facial movements.

A2F allows me to:

  • Simulate the speech of several phrases
  • Bake those results out to a USD cache
  • Bring that speech library into Houdini and feed it to a randomize system
  • Background characters in the scene now have matched movement/speech based on some ruleset (see the sketch below)
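The randomize step boils down to something like this (a minimal sketch; the clip file names, actor count, and "ruleset" are placeholders):

```python
# Index the baked USD speech clips, then hand each background actor a
# random clip plus a random frame offset so neighbours don't sync up.
import random
from pxr import Usd

CLIP_FILES = ["cheer_01.usd", "cheer_02.usd", "chant_01.usd"]  # placeholders

def clip_range(path):
    """Authored frame range of a baked speech clip."""
    stage = Usd.Stage.Open(path)
    return stage.GetStartTimeCode(), stage.GetEndTimeCode()

library = {path: clip_range(path) for path in CLIP_FILES}
rng = random.Random(1234)           # seeded so the crowd is reproducible

def assign():
    clip = rng.choice(CLIP_FILES)
    start, end = library[clip]
    return clip, rng.uniform(0.0, end - start)  # clip + phase offset

assignments = {f"actor_{i:04d}": assign() for i in range(10000)}
```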

Any tips/suggestions you can offer will be greatly appreciated.
