How can I change the end-effector's reference frame?


I am trying to use a ‘UR5e + 2f-140 + Realsense D435i (hand-eye)’ setup in several tasks, and right now I am making the UR5e pick and place a cube. The code saves depth and RGB images to "${HOME}/isaac_sim_image".

[Question 1]

When I use another USD file in which the robot has a different home position, the UR5e moves to a different configuration even though the target position is the same. It looks like a transformation offset from the gripper is being applied. How can I fix this problem?

[Question 2]

How can I set the initial viewpoint right after running the code? I believe the default viewpoint is the perspective camera, which is what I want, but my code starts with the hand-eye viewpoint instead.

I am sharing my code; the problematic USD mentioned in Q1 is named ‘ur5e_handeye_gripper_upright.usd’ (6.8 MB).

I rewrote this topic on Feb 16.


I’ve asked about your second question internally and will get back to you. As for your first question, from a preliminary look I have some ideas, but I need more information from you.

The problem: there is an offset between the upright UR5e's end effector and the target position.
This is caused by a mismatch between the state that RmpFlow believes the robot to be in and the state that the simulator believes the robot to be in. RmpFlow is the underlying algorithm your pick-and-place controller uses to reach a target position.

RmpFlow is a general algorithm for outputting c-space joint targets that reach a task-space position target. As such, it requires configuration files to make it agree with the state of the simulator. You can visualize where RmpFlow thinks the robot is using the RmpFlow debugging features, which I enabled in your RmpFlow controller class. It is easy to see that RmpFlow does not agree with the simulator about the state of the upright UR5e. The reason for this really depends on how you modified the original UR5e USD file to make the robot upright.
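For reference, a minimal sketch of enabling that debug visualization (assuming the `omni.isaac.motion_generation` RmpFlow API; exact method names can vary between Isaac Sim releases):

```python
# Sketch: visualize where RmpFlow *believes* the robot is.
# Assumes Isaac Sim's omni.isaac.motion_generation extension is loaded.
from omni.isaac.motion_generation import interface_config_loader
from omni.isaac.motion_generation.lula import RmpFlow

rmp_config = interface_config_loader.load_supported_motion_policy_config(
    "UR5e", "RMPflow"
)
rmp_flow = RmpFlow(**rmp_config)

# Draw RmpFlow's internal collision spheres in the viewport. If the
# spheres do not overlay the robot, RmpFlow and the simulator disagree
# about the robot's state.
rmp_flow.visualize_collision_spheres()
```

If the spheres sit at a rotated or offset copy of the arm, that is the mismatch described above.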

The RmpFlow controller class that you have includes these lines:

self.rmp_flow_config = mg.interface_config_loader.load_supported_motion_policy_config("UR5e", "RMPflow")
self.rmp_flow = mg.lula.motion_policies.RmpFlow(**self.rmp_flow_config)

What this does is load RmpFlow config files that match the UR5e that ships with Isaac Sim. The config file that I believe does not match the modified USD file is the UR5e URDF. RmpFlow learns the kinematic structure of the robot from its URDF file, so if the USD was modified to redefine what a joint value of 0 means, RmpFlow will not have this information and will effectively be controlling the unmodified robot.
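One way this could be fixed, sketched below under the assumption that the loaded config is a plain dict with a "urdf_path" entry (as in recent Isaac Sim releases), is to swap in a URDF that matches the modified USD. The URDF path here is hypothetical:

```python
# Sketch: point RmpFlow at a URDF that matches the modified (upright) USD
# instead of the stock UR5e URDF that ships with Isaac Sim.
from omni.isaac.motion_generation import interface_config_loader
from omni.isaac.motion_generation.lula import RmpFlow

rmp_config = interface_config_loader.load_supported_motion_policy_config(
    "UR5e", "RMPflow"
)

# Hypothetical path: a URDF whose base transform / joint zero positions
# agree with the edited USD file.
rmp_config["urdf_path"] = "/path/to/ur5e_upright.urdf"

rmp_flow = RmpFlow(**rmp_config)
```

The key point is that the USD and the URDF must describe the same kinematics; editing one without the other produces exactly the offset described above.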

So, where did you get the modified USD file with the upright UR5e, and do you happen to have a corresponding URDF file?


And as for your second question, try this:

viewport_handle = omni.kit.viewport_legacy.get_viewport_interface()
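Expanding that snippet into a fuller sketch (this is the legacy viewport interface from pre-2022 Isaac Sim; "/OmniverseKit_Persp" is the default perspective camera prim path in Kit):

```python
# Sketch using the legacy viewport interface.
import omni.kit.viewport_legacy

viewport_interface = omni.kit.viewport_legacy.get_viewport_interface()
viewport_window = viewport_interface.get_viewport_window()

# Switch the viewport back to the default perspective camera.
viewport_window.set_active_camera("/OmniverseKit_Persp")
```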


Referring to the link above, I tried:

from omni.kit.viewport.utility import get_active_viewport
viewport = get_active_viewport()

I can get the camera path to pass to set_active_camera via viewport.camera_path.

However, the funny thing is that once I set the perspective camera with the above method, I can see the scene through the perspective camera without any set_active_camera call.
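For reference, the newer-API flow described above can be sketched as follows (assuming Isaac Sim 2022.x or later, where the viewport object exposes a settable camera_path; the perspective camera path is the Kit default):

```python
# Sketch with the newer viewport utility API.
from omni.kit.viewport.utility import get_active_viewport

viewport = get_active_viewport()
print(viewport.camera_path)  # prim path of the currently active camera

# Assigning camera_path switches the active viewport camera.
viewport.camera_path = "/OmniverseKit_Persp"
```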

Hi @psh9002 - I am sorry, but can you please clarify these lines from your previous message?
" However, the funny thing is that once I set the perspective camera with the above method, I can see the scene through the perspective camera without any set_active_camera call "

Hi @rthaker.

As you know, the initial viewpoint was the depth camera at first, but I want it to be the perspective viewpoint. So I added

from omni.kit.viewport.utility import get_active_viewport
viewport = get_active_viewport()

and then I get the perspective viewpoint.

However, when I remove the lines above, the perspective viewpoint is still maintained rather than the depth camera viewpoint. Actually, I'm rather curious why the depth camera was set as the initial viewpoint in the first place. In more detail, setting the depth camera with a (1280, 720) resolution causes the problem, but when I set the resolution to (1920, 1080) I can watch the scene from the perspective view.

When I checked Q2 on another computer, I had to add the line that sets the perspective viewpoint.

Thanks, @arudich.

I’m going to need a little more time to understand what you’ve said; I should follow a motion-generation example. Basically, I used LulaKinematicsSolver rather than RMPflow.

As I understand your answer, there is a pre-defined URDF file for the supported robots, including the UR5e, and RMPflow computes the inverse kinematics from the URDF rather than from the USD file. The problem with my approach is that I only changed the USD but not the URDF. Is that right?

The UR5e USD file I used was modified by me, starting from the basic UR5e USD in Isaac Sim's asset library, but I had not thought of modifying the corresponding URDF file.

@psh9002 Yes, you are correct that RMPflow solves the kinematics from the robot URDF. That goes for any Lula algorithm, including the LulaKinematicsSolver. We do not currently have algorithms available that natively ingest USD files to solve kinematics. In the future we plan to build support into Lula for using USD as the single source of ground truth, but I cannot currently estimate the timeline for that project.
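To illustrate the point that the URDF is the source of truth for LulaKinematicsSolver as well, here is a minimal sketch; both file paths and the "tool0" frame name are assumptions that must match your own robot description files:

```python
# Sketch: LulaKinematicsSolver also reads kinematics from the URDF
# (plus a Lula robot description YAML), never from the USD stage.
from omni.isaac.motion_generation import LulaKinematicsSolver

ik_solver = LulaKinematicsSolver(
    robot_description_path="/path/to/ur5e_robot_description.yaml",  # hypothetical
    urdf_path="/path/to/ur5e_upright.urdf",                         # hypothetical
)

# Forward kinematics are computed against the URDF chain, so the URDF
# must match the USD for the results to line up with the simulator.
position, orientation = ik_solver.compute_forward_kinematics(
    frame_name="tool0",          # assumed end-effector frame name
    joint_positions=[0.0] * 6,
)
```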


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.