Simulation changes made from Extension are preserved in Stage

Hello,

I am experimenting with a prismatic joint for linear movement of a cube, and I am creating an extension to control the simulation. The extension has a button that, when clicked, will play/activate the simulation. Here is what I am doing.

In the stage, I added 2 cubes and added a prismatic joint between them. The prismatic joint is configured with a Linear Drive such that, on Play, one of the cubes moves along the x-axis.

Since I want to control the simulation on a button click, I removed the Linear Drive manually from the properties window of the prismatic joint and added it back via code (on the button click).

Fig: Before invoking the code below (prismatic joint)

from pxr import UsdPhysics

jointPrim = stage.GetPrimAtPath("/PrismaticJoint")  # GetPrimAtPath returns a prim, not a path
linearDriveAPI = UsdPhysics.DriveAPI.Apply(jointPrim, "linear")
linearDriveAPI.CreateTypeAttr("force")
linearDriveAPI.CreateTargetPositionAttr(40.0)
linearDriveAPI.CreateDampingAttr(0.0)
linearDriveAPI.CreateStiffnessAttr(100.0)

Fig: After executing the code

This adds the Linear Drive to the prismatic joint and activates the simulation.

This approach works only the first time. From the next button click onwards (or if I stop and play again), the simulation does not run.

One thing I observed: if I remove the Linear Drive manually and add it again through the editor, the values 40, 100, etc. are preserved. This shouldn't be the case, because adding a new Linear Drive from the editor should show its default values. I think these values are stored somewhere, which is why the simulation does not run again on button click.

So, my questions are:

  1. What is the correct way to start a simulation on a button click?
  2. Why are the values preserved for the Linear Drive, even if I remove and re-add it manually through the editor?
  3. What could be the reason the simulation does not work from the second time onwards?

Thank you for your patience in reading through this.

Any help is appreciated.
Thanks in advance.

Hi there. Is this error occurring in Isaac Sim? You need to post this in the Isaac Sim category: Latest Omniverse/Isaac Sim topics - NVIDIA Developer Forums

@Richard3D I am using USD Composer.

OK, thanks. I just had to ask first. Let me see if I can get some advanced help from one of our physics experts.

Hi,
so for your questions:

  1. You should call the timeline play → omni.timeline.get_timeline_interface().play()
  2. The values are preserved; this is correct. Removing the API does not remove the attributes — what is removed is the applied API schema, not the attributes. This is the default USD behavior.
  3. I am not sure I fully understand how you control the simulation. You can call omni.timeline.get_timeline_interface().play() and then omni.timeline.get_timeline_interface().stop(); you should be able to call these the same way they are called from the timeline control on the left sidebar. How do you need to step the simulation?
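To make point 1 concrete, a play/stop toggle for a button callback could look like the sketch below. In Omniverse the timeline object comes from omni.timeline.get_timeline_interface(); here a minimal stub stands in so the toggle logic is self-contained (the stub is illustrative only — its method names mirror the calls mentioned above):

```python
# Minimal sketch of a play/stop toggle for a button callback.
# In Omniverse you would pass omni.timeline.get_timeline_interface();
# the stub below is a stand-in so the logic runs anywhere.

class StubTimeline:
    """Duck-typed stand-in for the omni.timeline interface."""
    def __init__(self):
        self._playing = False

    def is_playing(self):
        return self._playing

    def play(self):
        self._playing = True

    def stop(self):
        self._playing = False


def on_button_clicked(timeline):
    """Toggle the simulation: play if stopped, stop if playing."""
    if timeline.is_playing():
        timeline.stop()
    else:
        timeline.play()


timeline = StubTimeline()
on_button_clicked(timeline)
print(timeline.is_playing())  # True: first click starts playback
on_button_clicked(timeline)
print(timeline.is_playing())  # False: second click stops it
```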

Regards,
Ales

Thanks so much @AlesBorovicka

@Richard3D @AlesBorovicka What I am trying to achieve is slightly different here.

Below is a quick example to explain the concept.

The Robot A picks the package from Conveyor A and places it on Conveyor B. The Robot B picks that package and places it in a pallet.

I added simulation to the conveyor belt and the robotic arm. When I press the Play button, the entire simulation starts, which is not what I need.

Instead, I want to know when a package has arrived at the end of Conveyor A so that I can trigger the simulation of Robot A, which picks the package from Conveyor A and places it on Conveyor B. In a similar fashion, I should be able to trigger the simulation of Robot B when the package reaches the end of Conveyor B.

In short, I am looking for a technique to trigger the simulations from my code, instead of having all of them play back automatically when I press the Play button.

How can I achieve that? How can I execute the simulations in sequence? What is the right approach?

Looking forward to your reply. Thanks in advance.

There are a couple of ways to do this.

The first would be to use the built-in Sequencer if you are using USD Composer 2023.2.5. The Sequencer can stagger animations like “clips” to play them back at the right time.

The second way of doing this is through OmniGraph / Action Graph nodes and logic. You could have “zones” that detect when the box enters them, and that could trigger the animation for the next thing.

However, the easiest and most basic way is just to manually shift the animated keyframes to the correct times on the master timeline. For example, I am assuming you exported some of these animated assets from another application. Let’s say Robot A has a 500-frame animation cycle that starts at frame 0 and goes to frame 500. The problem is that everything else starts at frame 0 as well. The easy way to fix this is to make a BIG timeline, say 4000 frames long, and then for each asset literally drag the animated keyframes further along the timeline, staggering everything into a sequence. Does that make sense?

So Conveyor A runs 0 to 500, then Robot A runs 500 to 1000, then so on and so on.
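The staggering described above can be computed mechanically. A short sketch (the clip names and lengths are illustrative, matching the example numbers):

```python
# Lay animation clips end-to-end on one master timeline by
# deriving each clip's start/end frame from the previous one.
clips = [("Conveyor A", 500), ("Robot A", 500),
         ("Conveyor B", 500), ("Robot B", 500)]

schedule = []
start = 0
for name, length in clips:
    schedule.append((name, start, start + length))
    start += length

for name, begin, end in schedule:
    print(f"{name}: frames {begin} to {end}")
# Conveyor A: frames 0 to 500
# Robot A: frames 500 to 1000
# Conveyor B: frames 1000 to 1500
# Robot B: frames 1500 to 2000
```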

Using purely physics, this should be possible using sleeping and triggers.

So you can set up the bodies to start asleep (there is an attribute on a rigid body).
Then you can have physics triggers (see the triggers demo); in a trigger you can wake up the bodies of the Franka arm and they should start moving. You can put them back to sleep once you are done with your work.
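The trigger idea can be sketched in plain Python as an axis-aligned zone check (the actual triggers demo uses PhysX trigger callbacks; the position and zone bounds below are illustrative only):

```python
def in_zone(pos, zone_min, zone_max):
    """Axis-aligned box containment test -- the core of a trigger zone."""
    return all(lo <= p <= hi for p, lo, hi in zip(pos, zone_min, zone_max))

# Hypothetical package position at the end of Conveyor A
package_pos = (9.5, 0.0, 0.5)
zone_min, zone_max = (9.0, -1.0, 0.0), (10.0, 1.0, 1.0)

if in_zone(package_pos, zone_min, zone_max):
    # At this point you would wake up Robot A's articulation
    print("package entered zone")
```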

To control sleeping on the articulation (the prim with the articulation root), you can use this code snippet:

from pxr import PhysicsSchemaTools
from omni.physx import get_physx_simulation_interface

# Encode the articulation root's SdfPath for the simulation interface
# (articulationRootPrimPath and stage_id come from your own setup)
art_encoded = PhysicsSchemaTools.sdfPathToInt(articulationRootPrimPath)

# Query the current sleep state
is_sleeping = get_physx_simulation_interface().is_sleeping(stage_id, art_encoded)

# Put the articulation to sleep; after the next simulation step,
# is_sleeping(...) reports True
get_physx_simulation_interface().put_to_sleep(stage_id, art_encoded)

# Wake it up again; after the next step it reports awake
get_physx_simulation_interface().wake_up(stage_id, art_encoded)
@Richard3D @AlesBorovicka Thank you for the detailed explanation. Let me revisit those topics over the weekend and see how can I leverage it.

I will keep you posted.

Thank you.

Great thanks

@Richard3D @AlesBorovicka I have seen the Franka examples in the Physics Demo Scenes.

I am familiar with basic rigid body joint simulations.

I need some guidance on how to set up the simulation for a robot. As per the example and code snippet:

  • The robot movements are defined by states (e.g., advance, go_to_pickup, lift, etc.) that trigger the simulation of the robotic arm.
  • The selected state executes the code that simulates the robot’s movement.

I have a few questions related to that:

  1. Is simulation through code the only approach?
  2. Is there an editor available that helps author the joint movements of a robotic arm? For example, if I move between 2 joints, could it auto-generate/record a code snippet that I can use as a baseline, instead of writing all the simulation code from the ground up?
  3. I am using USD Composer. Is that the right tool for this? If not, can I do the simulation in another tool (Isaac Sim, or similar) and import that USD into Composer?

In simple terms, if I have a robot, say a KUKA robot, how should I get started with authoring/creating a simulation of it? What is the recommended approach?

Thanks in advance.

@AlesBorovicka @Richard3D I did some further research. This is what I understood about robotic simulation; please feel free to correct me.

In robotic simulation, a robot typically has an associated URDF file that contains all the joint information. This URDF file is used for robotic simulations. Isaac Sim has the capability to import/export URDF, as per the documentation.

The URDF extension is not listed in Composer, so I assume we won’t be able to do that in Composer, and Isaac Sim is the only way to do robotic simulation.

Working with robotics assets is much simpler in Isaac Sim, as most relevant extensions are already available.

You could import the URDF file in Isaac Sim, save it as a USD file, and open that in Composer; that would work.

There are multiple ways to interact with physics. The Franka demo uses the Tensor API, a performant API that talks directly to PhysX. You could also use, say, OmniGraph to change attribute values to get the desired behavior, such as changing velocities or target poses for joint drives.

Again, it’s easier to do this in Isaac Sim, as it already brings some APIs that you can leverage.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.