Applying joint position targets for the Fetch robot

Hello, I’m quite new to Isaac Sim; I’ve only been learning it for about two weeks. I am trying to control a Fetch robot with the Lula IK solver to accomplish a pick-and-place task, and as a first step I want to control the arm of the Fetch with the base fixed. I am running this task in the Hello World extension by modifying the setup_scene, setup_post_load, and send_robot_actions methods.

Only 8 of Fetch’s joints are controlled in my case, with indices [2, 4, 6, 7, 8, 9, 10, 11], and I can get the target joint configuration with lula.compute_ik_ccd. However, when I set the joint positions with dynamic_control.set_articulation_dof_position_targets, only some of the joints move, while I expect all of them to move at least a little, since every element of the target joint configuration differs from the initial one. As a result, the gripper_link (which I set as the end effector) does not stop at the goal position. I wonder whether I am deploying the joint configuration in the correct way, or whether the problem is in the IK solver configuration. Here is some of my code:

    def setup_scene(self):
        world = self.get_world()
        world.scene.add_default_ground_plane()
        fetch_robot_path = "/fetch.usd"
        fetch_robot_prim_path = "/World/Fetch_Robot"
        add_reference_to_stage(usd_path=fetch_robot_path, prim_path=fetch_robot_prim_path)
        self._fetchbot = world.scene.add(Robot(
            prim_path=fetch_robot_prim_path, 
            name="fetchbot",
            position=np.array([.0, .0, .8]),
        ))

    async def setup_post_load(self):
        self._world = self.get_world()
        self._fetchbot = self._world.scene.get_object("fetchbot")
        self._world.add_physics_callback("sending_actions", callback_fn=self.send_robot_actions)

        self._dc = _dynamic_control.acquire_dynamic_control_interface()
        self._fetchbot_handle = self._dc.get_articulation("/World/Fetch_Robot")
        self._dc.wake_up_articulation(self._fetchbot_handle)
        default_joint_pos = self._dc.get_articulation_dof_states(self._fetchbot_handle, _dynamic_control.STATE_POS)["pos"]

        # Indices of the 8 controlled DOFs in the articulation's DOF ordering
        self._fetchbot_control_idx = np.array([2, 4, 6, 7, 8, 9, 10, 11])
        self._fetchbot_controller = IKSolver(
            robot_description_path="/root/fetch/fetch_descriptor.yaml",
            robot_urdf_path="/root/fetch/fetch.urdf",
            default_joint_pos=default_joint_pos[self._fetchbot_control_idx],
            eef_name="gripper_link",
        )
    
    def send_robot_actions(self, step_size):
        # Fixed Cartesian goal for the end effector, expressed in the robot's base frame
        target_pos = np.array([0.5, 0.5, 0.1])
        joint_pos = self._fetchbot_controller.solve(
            target_pos=target_pos,
        )

        if joint_pos is not None:
            # Start from the current DOF positions and overwrite only the controlled joints
            dof_states = self._dc.get_articulation_dof_states(self._fetchbot_handle, _dynamic_control.STATE_POS)
            new_positions = dof_states["pos"]
            new_positions[self._fetchbot_control_idx] = joint_pos
            self._dc.set_articulation_dof_position_targets(self._fetchbot_handle, new_positions.astype(np.float32))
        else:
            carb.log_warn("fetchbot: IK did not converge to a solution. No action is being taken.")


class IKSolver:
    """
    Class for thinly wrapping Lula IK solver
    """

    def __init__(
        self,
        robot_description_path,
        robot_urdf_path,
        eef_name,
        default_joint_pos,
    ):
        # Create robot description, kinematics, and config
        self.robot_description = lula.load_robot(robot_description_path, robot_urdf_path)
        self.kinematics = self.robot_description.kinematics()
        self.config = lula.CyclicCoordDescentIkConfig()
        self.eef_name = eef_name
        self.default_joint_pos = default_joint_pos

    def solve(
        self,
        target_pos,
        target_quat=None,
        tolerance_pos=0.002,
        tolerance_quat=0.01,
        weight_pos=1.0,
        weight_quat=0.05,
        max_iterations=150,
        initial_joint_pos=None,
    ):
        """
        Backs out joint positions to achieve desired @target_pos and @target_quat

        Args:
            target_pos (3-array): desired (x,y,z) local target cartesian position in robot's base coordinate frame
            target_quat (4-array or None): If specified, desired (x,y,z,w) local target quaternion orientation in
                robot's base coordinate frame. If None, IK will be position-only (will override settings such that
                orientation's tolerance is very high and weight is 0)
            tolerance_pos (float): Maximum position error (L2-norm) for a successful IK solution
            tolerance_quat (float): Maximum orientation error (per-axis L2-norm) for a successful IK solution
            weight_pos (float): Weight for the relative importance of position error during CCD
            weight_quat (float): Weight for the relative importance of orientation error during CCD
            max_iterations (int): Number of iterations used for each cyclic coordinate descent.
            initial_joint_pos (None or n-array): If specified, will set the initial cspace seed when solving for joint
                positions. Otherwise, will use self.default_joint_pos

        Returns:
            None or n-array: Joint positions for reaching desired target_pos and target_quat, otherwise None if no
                solution was found
        """
        pos = np.array(target_pos, dtype=np.float64).reshape(3, 1)
        # Note: [0, 0, 0, 1] is the identity quaternion in (x,y,z,w) ordering; make sure this
        # matches the ordering expected by quats_to_rot_matrices in your environment
        rot = np.array(quats_to_rot_matrices(np.array([0, 0, 0, 1.0]) if target_quat is None else target_quat), dtype=np.float64)
        ik_target_pose = lula.Pose3(lula.Rotation3(rot), pos)

        # Set the cspace seed, tolerances, weights, and iteration limit
        initial_joint_pos = self.default_joint_pos if initial_joint_pos is None else np.array(initial_joint_pos)
        self.config.cspace_seeds = [initial_joint_pos]
        self.config.position_tolerance = tolerance_pos
        self.config.orientation_tolerance = 100.0 if target_quat is None else tolerance_quat
        self.config.position_weight = weight_pos
        self.config.orientation_weight = 0.0 if target_quat is None else weight_quat
        self.config.max_iterations_per_descent = max_iterations

        # Compute target joint positions; per the docstring, report failure as None
        ik_results = lula.compute_ik_ccd(self.kinematics, ik_target_pose, self.eef_name, self.config)
        return np.array(ik_results.cspace_position) if ik_results.success else None

This doesn’t seem likely to be an issue with the IK solver, since the solver is successfully returning a solution. (As an aside, the IK solver is not meant to be used to directly generate controls for a robot. It simply finds a joint-space solution to a task-space query; it will not generate a path to get the robot there.)

This seems to be a problem with getting the robot to converge to a joint-space command. To produce minimal reproduction steps, I would encourage creating a script that sets the joint targets to a particular value and then reads the joint positions back.

The dynamic control interface is deprecated and should be replaced with the Isaac Core Articulation interface.
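
For orientation, the dynamic_control calls in your snippet map roughly onto the Articulation API like this (a sketch only, reusing the Fetch prim path from your code; it assumes the scene is initialized and playing):

from omni.isaac.core.articulations import Articulation
from omni.isaac.core.utils.types import ArticulationAction

# dc.get_articulation(...) + dc.wake_up_articulation(...)  ->
robot = Articulation("/World/Fetch_Robot")
robot.initialize()

# dc.get_articulation_dof_states(handle, STATE_POS)["pos"]  ->
current_positions = robot.get_joint_positions()

# dc.set_articulation_dof_position_targets(handle, targets)  ->
robot.apply_action(ArticulationAction(joint_positions=current_positions))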

Here is a snippet that can be pasted into the script editor for use with the Universal Robots UR16e robot (which can be loaded from the menu via “Create” → “Isaac” → “Robots” → “Universal Robot” → “UR16e”).

This shows how to apply joint position targets to either all joints or to a subset of joints. It also shows how to query joint positions and print them to the console.

I would start by running similar experiments on your robot to ensure that joint-level control is working as expected.

from omni.isaac.core.articulations import Articulation, ArticulationSubset
from omni.isaac.core.utils.types import ArticulationAction
import numpy as np

# Load robot
robot = Articulation("/UR16e")
robot.initialize()
print("DOF names:", robot.dof_names)

# Set position for all joints
robot.apply_action(ArticulationAction(joint_positions=np.array([1.0, 0.0, 2.0, 3.0, 0.0, 0.0])))

# Print position
position = robot.get_joint_positions()
print("position:", position)

# Set position for a subset of joints
subset_names = ['shoulder_pan_joint', 'shoulder_lift_joint']
robot_subset = ArticulationSubset(robot, subset_names)
robot_subset.apply_action([2.0, 2.0])

# Print position
position = robot.get_joint_positions()
print("position:", position)

Expected output in console:

DOF names: ['shoulder_pan_joint', 'shoulder_lift_joint', 'elbow_joint', 'wrist_1_joint', 'wrist_2_joint', 'wrist_3_joint']
position: [ 1.9998459e+00  1.9971803e+00  1.9991046e+00  2.9993553e+00
  2.4549900e-05 -1.1279739e-08]
position: [ 1.9998459e+00  1.9971803e+00  1.9991046e+00  2.9993553e+00
  2.4549900e-05 -1.1279739e-08]
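
If you only need the subset back, ArticulationSubset can also read joint positions directly (continuing from the snippet above):

# Read back only the subset's joint positions (ordered as in subset_names)
subset_position = robot_subset.get_joint_positions()
print("subset position:", subset_position)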

Once the joints are working as expected and you are ready to try using IK, this guide is likely to be helpful:
https://docs.omniverse.nvidia.com/isaacsim/latest/advanced_tutorials/tutorial_motion_generation_lula_kinematics.html#using-the-lulakinematicssolver-to-compute-forward-and-inverse-kinematics
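
In outline, the flow from that tutorial looks something like this (a sketch only; it reuses the description-file paths and end-effector frame name from your IKSolver, which must match your actual files, and assumes robot is the initialized Articulation for the Fetch):

from omni.isaac.motion_generation import LulaKinematicsSolver, ArticulationKinematicsSolver
import numpy as np

lula_solver = LulaKinematicsSolver(
    robot_description_path="/root/fetch/fetch_descriptor.yaml",
    urdf_path="/root/fetch/fetch.urdf",
)
ik_solver = ArticulationKinematicsSolver(robot, lula_solver, "gripper_link")

# Lula solves in the robot's base frame, so keep the solver's notion of the base
# in sync with the articulation's world pose before solving
base_pos, base_orient = robot.get_world_pose()
lula_solver.set_robot_base_pose(base_pos, base_orient)

action, success = ik_solver.compute_inverse_kinematics(
    target_position=np.array([0.5, 0.5, 0.1]),
)
if success:
    robot.apply_action(action)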

Thanks for your reply! This week I’ve also tried the Articulation interface to achieve the same task as above, using the guide you provided and Franka’s FollowTarget task as references.

It seems the action is successfully applied to the arm of the Fetch, and according to the output of compute_end_effector_pose the end effector appears to be right at the target position, but there is always a deviation between the actual position of the end effector and my target position. I suspect the deviation is caused by an incorrect robot base pose, but I don’t know how to fix that. By the way, the same implementation works as expected for Franka; the deviation only exists on the Fetch robot.
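
For reference, this is how I observe the deviation (a simplified sketch; ik_solver is the ArticulationKinematicsSolver from the guide, and the gripper_link prim path is just a guess at my stage hierarchy):

from omni.isaac.core.prims import XFormPrim

# Pose the solver reports for the end effector
ee_pos, ee_rot = ik_solver.compute_end_effector_pose()

# World pose of the actual gripper_link prim on the stage
# (hypothetical path; depends on how the Fetch USD is structured)
gripper = XFormPrim("/World/Fetch_Robot/gripper_link")
actual_pos, _ = gripper.get_world_pose()

print("solver end-effector position:", ee_pos)
print("actual gripper_link position:", actual_pos)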

Could you please have a look at the problem in that post?
