Half and Half: Workflow Discussion

I’ve been struggling with how to frame this so that it makes sense, so I’m just going to lay it all out here and hope for the best.

I’ll describe this in 2 parts - Part 1: TimeSampled XForm Animation that switches to Part 2: Ragdoll Physics with Joint Breaks

So, this is a robot gladiator movie I’ve been doing R&D on for about 3 years now. The previous 1.5 years were in Houdini land - with the rest in Omniverse land.

Let me start by saying the goal of this workflow is NOT USDSkel and a traditional skinned character setup. It’s the exact opposite. The goal of the workflow is PhysX Joint Breaks - that’s the end goal, and I worked backwards from there in creating my workflow and pipeline.

Part 1
So, then how does it “work”? Well, think of USDSkel, but it’s replaced with a hierarchy of XForms that represent the Joints of a USDSkel setup. Any Mesh that is a child of an XForm will articulate with it - perfect. So, I build my ‘robot’ up, piece by piece, by making each Mesh piece a child of its respective XForm.
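To illustrate the idea, here is a plain-Python sketch (not the Kit API - all class and prim names are made up): a Mesh parented under an XForm inherits every ancestor transform, which is what replaces skinning in this setup.

```python
# Plain-Python sketch of an XForm hierarchy standing in for USDSkel
# joints. Only translation is modeled, to keep the example small.

class XForm:
    def __init__(self, name, local_translate=(0.0, 0.0, 0.0), parent=None):
        self.name = name
        self.local_translate = local_translate
        self.parent = parent

    def world_translate(self):
        # Compose up the chain: a Mesh parented under an XForm moves
        # whenever any ancestor XForm moves - that is the whole trick.
        if self.parent is None:
            return self.local_translate
        px, py, pz = self.parent.world_translate()
        lx, ly, lz = self.local_translate
        return (px + lx, py + ly, pz + lz)

# Build a two-joint "arm": upper_arm -> forearm -> forearm mesh.
upper_arm = XForm("upper_arm", (0.0, 10.0, 0.0))
forearm = XForm("forearm", (0.0, 5.0, 0.0), parent=upper_arm)
mesh = XForm("forearm_mesh", parent=forearm)

print(mesh.world_translate())  # (0.0, 15.0, 0.0)
```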

So, how do you “animate” a hierarchy of XForms? Very easily - because I have a workflow in my pipeline that converts a USDSkel and its Joints into an XForm hierarchy with TimeSampled data!

So, I can take any USDSkel data source (FBX Animations on disk, Pose Tracker results, Houdini KineFX, etc.) and convert it into an XForm hierarchy with the Animation baked in as TimeSampled data.
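As a hedged sketch of that baking step (plain Python, not the real pipeline - a real version would evaluate the USDSkel/FBX/KineFX source per frame and author USD timeSamples; `sample_fn` here is a hypothetical placeholder for that evaluation):

```python
# Bake a joint-animation source into per-joint, per-frame samples.
# A dict stands in for the USD layer that would hold the timeSamples.

def bake_joint_animation(joint_names, sample_fn, start_frame, end_frame):
    """Return {joint: {frame: transform}} - one sample per frame.

    sample_fn(joint, frame) stands in for evaluating the USDSkel
    (or FBX / KineFX) source at that frame.
    """
    baked = {}
    for joint in joint_names:
        baked[joint] = {
            frame: sample_fn(joint, frame)
            for frame in range(start_frame, end_frame + 1)
        }
    return baked

# Fake source: translate each joint up by 1 unit per frame.
def fake_source(joint, frame):
    return (0.0, float(frame), 0.0)

baked = bake_joint_animation(["hips", "spine"], fake_source, 1, 3)
print(baked["spine"][2])  # (0.0, 2.0, 0.0)
```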

This XForm Hierarchy concept is very, very powerful (for my needs). Since there is no mesh skinning, I have full control over the Actor’s look, using Variants, realtime secondary animation, etc. Additionally, with the recent addition of the ‘USD TimeSample to Curves’ feature, I can create a library of ‘XForm Animations’ that can be dragged onto the Sequencer to ‘marionette’ my Actor or layered in via USD Layers.

So, my robot is now assembled, it is being animated by a Sequencer Track and AssetClip - which is feeding curve data to the XForm(s) - animating the Actor.

Part 2
Part 1 is static - in that the animation is baked into an AssetClip that articulates the Actor, and the Actor’s position in the scene is also a separate AssetClip.

Part 2 is dynamic - in that when a PhysX collision is detected, the entire Sequencer Track for the XForm needs to be hidden (effectively turning the AssetClip off and stopping the curve data from being sent to the XForm). [I already confirmed that USD visibility affects the Sequencer Track/AssetClip data transmission.]
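The gating logic boils down to one decision per XForm per frame. A minimal sketch (plain Python, not the Sequencer API; the function name and pose values are illustrative):

```python
# Curve data flows to the XForm only while the Sequencer track is
# visible; once hidden, the PhysX result is the only thing driving it.

def evaluate_xform(track_visible, clip_sample, physics_pose):
    """Return the pose the XForm should use this frame."""
    if track_visible:
        return clip_sample   # baked AssetClip animation drives the joint
    return physics_pose      # track hidden: the simulation result wins

print(evaluate_xform(True, "clip_pose", "sim_pose"))   # clip_pose
print(evaluate_xform(False, "clip_pose", "sim_pose"))  # sim_pose
```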

So, how do I do the Joint Breaks? Well, the Mesh(es) under the XForm are also PhysX Rigid Bodies, but they are created with ‘Rigid Body Enabled’ turned off and ‘Starts as Asleep’ turned on - so no PhysX simulation result is competing against the Sequencer AssetClip.
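For reference, those two toggles correspond to the `physics:rigidBodyEnabled` and `physics:startsAsleep` attributes of the UsdPhysics RigidBodyAPI schema. The helper below is a plain-Python stand-in (the function and dict are illustrative, not a real authoring API):

```python
# A "dormant" rigid-body spec: present on the Mesh, but inert until a
# joint break explicitly enables and wakes it.

def make_dormant_rigid_body_spec(mesh_path):
    # physics:rigidBodyEnabled / physics:startsAsleep are the
    # UsdPhysics RigidBodyAPI attribute names; the dict is a stand-in
    # for authoring them on the Mesh prim.
    return {
        "prim": mesh_path,
        "physics:rigidBodyEnabled": False,  # no sim until the break
        "physics:startsAsleep": True,       # wake explicitly on contact
    }

spec = make_dormant_rigid_body_spec("/World/Robot/Forearm/Mesh")
print(spec["physics:rigidBodyEnabled"])  # False
```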

Using the new OmniGraph nodes for PhysX Scene Querying, I can use a hidden Collider [USD Visibility doesn’t affect PhysX] to detect a ‘collision’ and toggle between the baked animation and the PhysX simulation - so, for each Mesh that had a collision, toggle the Sequencer track off for the XForm and enable/wake up the Rigid Body. When PhysX takes over control of the Actor, it becomes a ragdoll and reacts to PhysX stimuli in the scene.
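The per-Mesh handoff described above can be sketched as a small state machine (plain Python, not OmniGraph; class and joint names are illustrative):

```python
# On a reported hit: hide the track (stopping curve data) and
# enable/wake the rigid body, per joint that was actually hit.

class JointBreakController:
    def __init__(self, joints):
        # Each joint starts animation-driven: track visible, body off.
        self.state = {
            j: {"track_visible": True, "body_enabled": False, "asleep": True}
            for j in joints
        }

    def on_collision(self, joint):
        s = self.state[joint]
        s["track_visible"] = False  # AssetClip stops driving the XForm
        s["body_enabled"] = True    # PhysX takes over...
        s["asleep"] = False         # ...and the body is woken up

ctl = JointBreakController(["left_arm", "right_arm"])
ctl.on_collision("left_arm")
print(ctl.state["left_arm"]["body_enabled"])    # True
print(ctl.state["right_arm"]["track_visible"])  # True (still animated)
```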

Every collision from the hidden collider is a joint break. Think of a laser gatling gun hitting an army of charging robots. Each visible laser beam has a hidden collider.

So, I know that’s a lot of information to digest - but this is what I’ve been working on.

If a single Actor can easily have 50 XForms, that’s a lot of data when you scale that up to an Army of robots.
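A quick back-of-the-envelope count, under assumed numbers (100 Actors, a 240-frame shot, 3 animated xformOps per XForm - all of those are my assumptions, not figures from the project):

```python
# Rough data volume for the baked-animation approach at army scale.
xforms_per_actor = 50
actors = 100
frames = 240
ops_per_xform = 3  # e.g. translate, rotate, scale

samples = xforms_per_actor * actors * frames * ops_per_xform
print(samples)  # 3600000
```

That is 3.6 million time samples to store and evaluate for one 10-second shot, which is why vectorized evaluation (Warp, OmniGraph bundles) starts to matter.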

So, I’m looking for suggestions, best practices, guidance, and possible Kit SDK improvements that could assist me with this movie project.

I’ve been looking at Warp Kernels, OmniGraph bundles and the like as possible targets to help me with scaling this setup, up to the scale I’d need.

Hello @daryl.dunlap.ohio! First, I’d like to suggest that you join the Omniverse Community Discord at discord.gg/nvidiaomniverse. We have thousands of industry experts and developers who can have a back-and-forth discussion with you on the best way to set up your scene. I suggest that because every project you do will likely need different workflows depending on what the requirements are.

I’ve shared your post with our Animation Team and our Physics Team to add their suggestions and give you further assistance!!

Another possibility for the ragdoll initialization would be to listen to contact reports (see the physics demos - contact report). You could have the bodies set as kinematic, so they would move with the animation.
Once a contact report is detected for the kinematic body against something else, you would enable the bodies.
For contact reporting between kinematic bodies, there is a bool to enable on the PhysicsScene - report kinematic vs kinematic pairs.
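As a rough sketch of that flow (plain Python, not the PhysX API - the scene flag mirrors the PhysicsScene kinematic-pairs toggle, and all names here are illustrative):

```python
# Bodies are kinematic and follow the animation; a contact report
# involving a kinematic body flips it to dynamic so the ragdoll can
# take over.

def handle_contact_report(scene, body_a, body_b):
    both_kinematic = body_a["kinematic"] and body_b["kinematic"]
    if both_kinematic and not scene["report_kinematic_pairs"]:
        return  # kinematic-vs-kinematic pairs need the scene opt-in
    for body in (body_a, body_b):
        if body["kinematic"]:
            body["kinematic"] = False  # switch to dynamic: ragdoll time

scene = {"report_kinematic_pairs": True}
robot = {"kinematic": True}
laser = {"kinematic": True}
handle_contact_report(scene, robot, laser)
print(robot["kinematic"])  # False
```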
Would that work for your use case?

Ahh, Kinematic to Kinematic body contact reporting would indeed help out a lot in this workflow.

I’ve taken a few days off next week to get back around to working on this - my plan is to share a small sample file that uses all the techniques discussed, so that Devs can interact with the data and workflows involved.