End-effector offset of RMP/IK

I’ve tried follow_target_with_rmp.py and follow_target_with_ik.py and found that the converged end-effector position is offset from the target position. The offset is more than 1 cm, which is not acceptable for position control.

@BuckBabich

Hi, I think I encountered something similar before - some parts of NVIDIA’s code for the Franka seem to use the right finger as the end effector. Is that the same issue?

I think it’s the same problem.

@newuhe Sorry for the slow response. Can you confirm that you’re seeing this for the Franka examples (as opposed to UR10 or DOFBot)?

I’m sure it’s follow_target_with_rmp.py under omni.isaac.franka. The end-effector position offset is computed from the target position and the position of “panda_rightfinger”.

@newuhe How are you determining the difference between the converged end effector position and the target position?
RmpFlow and Inverse Kinematics have algorithmically nothing to do with each other, so if they are both bringing the robot the same distance from your target, it sounds like they have a different idea of where the end effector frame is than you do. To check this, you can add the line

rmpflow.visualize_end_effector_position()

to see where RmpFlow thinks the end effector is. In ‘follow_target_with_rmpflow.py’, you will need to access the RmpFlow instance from the RMPFlowController class.

rmpflow = rmpflow_controller.get_motion_policy()

See the links below for an example code snippet and an explanation of what debugging features RmpFlow has available.

RmpFlow Debugging Features Example Code
RmpFlow Debugging Features

If the visualized end-effector position is being brought to the exact position of the target cube, then RmpFlow is working as it should, in that it brings the “panda_rightfinger” frame to the set target position.

I printed the target position and the converged end-effector position and found that there is an offset. In the converged initial configuration, it shows:

end position: (array([-0.02573992,  0.13310829,  0.72076553], dtype=float32), array([ 0.02917537,  0.9953671 , -0.02766433, -0.08733831], dtype=float32))
target position: [0.  0.1 0.7]

As shown, the offset is still on the order of centimeters.
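For reference, the magnitude of that offset can be computed directly from the numbers above:

```python
import numpy as np

# The positional offset is the Euclidean distance between the converged
# end-effector position (first element of the reported pose) and the target.
end_position = np.array([-0.02573992, 0.13310829, 0.72076553])
target_position = np.array([0.0, 0.1, 0.7])

offset = np.linalg.norm(end_position - target_position)
print(f"offset: {offset * 100:.1f} cm")  # about 4.7 cm
```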

Here is the script:

from omni.isaac.kit import SimulationApp

simulation_app = SimulationApp({"headless": True})
from omni.isaac.core.utils.extensions import enable_extension
# Default Livestream settings
simulation_app.set_setting("/app/window/drawMouse", True)
simulation_app.set_setting("/app/livestream/proto", "ws")
simulation_app.set_setting("/app/livestream/websocket/framerate_limit", 120)
simulation_app.set_setting("/ngx/enabled", False)

# Note: Only one livestream extension can be enabled at a time
# Enable Native Livestream extension
# Default App: Streaming Client from the Omniverse Launcher
enable_extension("omni.kit.livestream.native")

from omni.isaac.franka.tasks import FollowTarget
from omni.isaac.franka.controllers import RMPFlowController
from omni.isaac.core import World
import numpy as np


my_world = World(stage_units_in_meters=1.0)
my_task = FollowTarget(name="follow_target_task")
my_world.add_task(my_task)
my_world.reset()
task_params = my_world.get_task("follow_target_task").get_params()
franka_name = task_params["robot_name"]["value"]
target_name = task_params["target_name"]["value"]
my_franka = my_world.scene.get_object(franka_name)

rmpflow_controller = RMPFlowController(name="target_follower_controller", robot_articulation=my_franka)

#The underlying RmpFlow instance is retrieved from the RMPFlowController
rmpflow = rmpflow_controller.get_motion_policy()

#Set RmpFlow to perform an internal rollout of the robot joint state, ignoring updates from the simulator
rmpflow.set_ignore_state_updates(True)

#Set RmpFlow to visualize where it believes the robot to be
rmpflow.visualize_collision_spheres()

articulation_controller = my_franka.get_articulation_controller()

bad_proportional_gains = articulation_controller.get_gains()[0]/50
articulation_controller.set_gains(kps = bad_proportional_gains)

while simulation_app.is_running():
    my_world.step(render=True)
    if my_world.is_playing():
        if my_world.current_time_step_index == 0:
            my_world.reset()
        observations = my_world.get_observations()

        #This includes an internal call to RmpFlow.update_world()
        actions = rmpflow_controller.forward(
            target_end_effector_position=observations[target_name]["position"],
            target_end_effector_orientation=observations[target_name]["orientation"],
        )

        #The robot base pose will not be automatically tracked in this example
        #But it could be using the rmpflow object.

        articulation_controller.apply_action(actions)
        print("end position:", my_franka.end_effector.get_world_pose())
        print("target position:", observations[target_name]["position"])
        simulation_app.update()

simulation_app.close()

@newuhe I noticed two things in this script:

  1. See the lines
bad_proportional_gains = articulation_controller.get_gains()[0]/50
articulation_controller.set_gains(kps = bad_proportional_gains)

These lines intentionally set the PD gains to incorrect values in order to demonstrate the RmpFlow debugging features. You should copy from a different part of the RmpFlow tutorial. Try here: RmpFlow Tutorial Code Snippet
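To see why weakened proportional gains can produce exactly this kind of persistent offset, here is a minimal 1-DOF sketch (plain Python, not Isaac Sim code). Assuming a PD controller with no gravity feedforward, the mass settles where the proportional term balances gravity, so the steady-state error is g/kp and dividing kp by 50 multiplies the offset by 50:

```python
def steady_state_error(kp, kd, target=0.5, g=9.81, dt=0.001, steps=20000):
    """Simulate a unit mass under gravity with PD control (no gravity feedforward)."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        u = kp * (target - x) - kd * v  # PD control effort
        a = u - g                        # unit mass, gravity pulls down
        v += a * dt
        x += v * dt
    return target - x                    # steady-state tracking error

err_nominal = steady_state_error(kp=2500.0, kd=100.0)  # nominal gains
err_bad = steady_state_error(kp=50.0, kd=14.0)         # kp divided by 50

print(f"error with nominal kp: {err_nominal:.4f} m")  # ~ g/kp = 0.0039 m
print(f"error with kp/50:      {err_bad:.4f} m")      # ~ g/kp = 0.1962 m
```

The gain values here are hypothetical, but the mechanism is general: the weaker the proportional gain, the farther the converged position sits from the commanded target.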

  2. In the code snippet I directed you to, you’ll see that RmpFlow is initialized like so:
rmpflow = RmpFlow(
    robot_description_path = rmp_config_dir + "/franka/rmpflow/robot_descriptor.yaml",
    urdf_path = rmp_config_dir + "/franka/lula_franka_gen.urdf",
    rmpflow_config_path = rmp_config_dir + "/franka/rmpflow/franka_rmpflow_common.yaml",
    end_effector_frame_name = "right_gripper",
    evaluations_per_frame = 5
)

Notice the line: end_effector_frame_name="right_gripper"
The RmpFlow algorithm uses the Franka’s URDF file as its source of truth, rather than USD. You can set any valid frame in the URDF as the end effector that RmpFlow should use. The “right_gripper” frame is a frame that we added to the Franka URDF, and it does not exist on the USD stage. As you have encountered, this can be a little misleading: my_franka.end_effector refers to the USD prim at the path “panda/panda_rightfinger”, which does not exactly match the frame that RmpFlow is using for position control.

To get the end-effector position that RmpFlow is using, you should use the line:

rmpflow_end_effector_prim = rmpflow.get_end_effector_as_prim()

The rmpflow_end_effector_prim will then track the position of the end effector over time, which you can query with rmpflow_end_effector_prim.get_world_pose().
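As a side note, the discrepancy between two candidate end-effector frames is just a fixed local offset rotated into the world frame, which is why it varies with the robot’s orientation. A minimal numpy sketch (the local offset below is hypothetical; the real “right_gripper” placement is defined in the Franka URDF):

```python
import numpy as np

def quat_to_matrix(q):
    """Rotation matrix from a (w, x, y, z) quaternion."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# Pose reported above for panda_rightfinger (position, then w-x-y-z quaternion)
finger_position = np.array([-0.0257, 0.1331, 0.7208])
finger_orientation = np.array([0.0292, 0.9954, -0.0277, -0.0873])

# Hypothetical finger->gripper offset in the finger's local frame
local_offset = np.array([0.0, 0.04, 0.05])

# World position of the other frame = finger position + rotated local offset
gripper_position = finger_position + quat_to_matrix(finger_orientation) @ local_offset
print(gripper_position)
```

Comparing rmpflow_end_effector_prim.get_world_pose() against my_franka.end_effector.get_world_pose() in the simulator would show a discrepancy of this form.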

One of our goals for the near future is to use USD as the single source of truth, which will eliminate this kind of inconsistency, but for now, RmpFlow (like other common implementations of manipulation planning and control algorithms) relies on a URDF file.