UR10 path planning

1. I’d like to perform path planning using a UR10 robot instead of Franka.
Q1) In my first attempt, I tried to create a user example for manipulator path planning and import a UR10 robot, but it didn’t work. I would like to know how to import a UR10 robot into this example.
Q2) I’d like to learn how to add a new target object to the example mentioned above.
Q3) As a second approach, I imported the UR10 robot from the “import robots” section. I want to open the code for this file and integrate the path planning code. Is that possible?

2. I want to create paths using the Artificial Potential Field (APF) path planning algorithm instead of the default one.
Q4) Similarly, I’m wondering if it’s possible to use the APF path planning algorithm in the manipulation path planning examples. Please provide guidance on how to do this.

What didn’t work when you tried to rewrite the example? Did you get error messages?

If you want to bring your own planning and control algorithms, you can use the Isaac Articulation interface to send joint position commands for the robot.

Here is an example of how to use the Articulation interface for the UR10.

First I add a UR10 to the scene using “Create → Isaac → Robots → Universal Robots → UR10”.

Then I press Play to start the simulation and open the script editor using “Window → Script Editor”.

With the simulation running, you can then use a short script to see the names of each joint in the articulation, set target positions, and read joint positions:

from omni.isaac.core.articulations import Articulation
from omni.isaac.core.utils.types import ArticulationAction
import numpy as np

# Wrap the articulation at the prim path created by the menu above
robot = Articulation("/UR10")
robot.initialize()
print("DOF names:", robot.dof_names)

# Command a target position (in radians) for all six joints
robot.apply_action(ArticulationAction(joint_positions=np.array([1.0, 0.0, 2.0, 3.0, 0.0, 0.0])))

# Read back the current joint positions
position = robot.get_joint_positions()
print("position:", position)

Here is the output I get from running it twice. Note that the first time, the position readout is near zero because the robot hasn’t had time to move immediately after the action is applied in the same step. The second set of output closely approximates the target position.

DOF names: ['shoulder_pan_joint', 'shoulder_lift_joint', 'elbow_joint', 'wrist_1_joint', 'wrist_2_joint', 'wrist_3_joint']
position: [ 1.2860776e-06  7.5394190e-03  1.6797542e-03 -1.4944457e-06
  1.0166268e-07 -6.6924440e-07]
DOF names: ['shoulder_pan_joint', 'shoulder_lift_joint', 'elbow_joint', 'wrist_1_joint', 'wrist_2_joint', 'wrist_3_joint']
position: [ 9.9996537e-01  3.5953731e-03  1.9993634e+00  3.0001130e+00
 -6.8680338e-06  3.4853120e-09]

This Articulation API can be used as part of a standalone Python application or a custom extension; both workflows are described in “1.2. Isaac Sim Workflows” in the Omniverse Isaac Sim documentation.
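
If you take the standalone route, here is a minimal sketch of the same Articulation commands wrapped in the SimulationApp/World pattern from the workflows docs (run with Isaac Sim’s bundled Python, e.g. ./python.sh my_script.py). The UR10 asset path on the assets server is an assumption to verify for your Isaac Sim version.

from omni.isaac.kit import SimulationApp

# SimulationApp must be created before any other omni.isaac imports
simulation_app = SimulationApp({"headless": False})

from omni.isaac.core import World
from omni.isaac.core.articulations import Articulation
from omni.isaac.core.utils.stage import add_reference_to_stage
from omni.isaac.core.utils.nucleus import get_assets_root_path
from omni.isaac.core.utils.types import ArticulationAction
import numpy as np

world = World()
world.scene.add_default_ground_plane()

# Reference the UR10 asset into the stage (asset path assumed; check your assets server)
assets_root = get_assets_root_path()
add_reference_to_stage(assets_root + "/Isaac/Robots/UniversalRobots/ur10/ur10.usd", "/UR10")

robot = Articulation("/UR10")
world.reset()        # initialize physics
robot.initialize()   # initialize the articulation after the reset

robot.apply_action(ArticulationAction(joint_positions=np.array([1.0, 0.0, 2.0, 3.0, 0.0, 0.0])))
for _ in range(200):
    world.step(render=True)

print("position:", robot.get_joint_positions())
simulation_app.close()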

If you decide to go the custom extension route, there is a set of Isaac Sim extension templates that are a great jumping-off point: “3.1. Isaac Sim Extension Templates” in the Omniverse Isaac Sim documentation.

[attached image]

Thank you! That was helpful.

I have an additional question. So far, I’ve been adding user samples at the location shown in the attached image and writing new code, such as controllers, based on the Isaac Sim path planning example, but it isn’t working properly.

Then, as you said, can I control the UR10 directly from a Python script using a path planning algorithm (e.g., APF)? I wonder if I just need to use the Articulation interface for everything (setting up obstacles, controlling the robot, etc.).

Hi @ros_developer - Have you gone through the Motion Generation docs? Can you point us to the particular step in the tutorial where you are stuck, so the team can help you further?
https://docs.omniverse.nvidia.com/isaacsim/latest/advanced_tutorials/tutorials_advanced_motion_generation.html

I’ve come across a robot motion generation package called Lula, and I’ve seen an example of it using the RRT (Rapidly-exploring Random Tree) algorithm in the section named “6. Motion Generation”. My question is about the feasibility of using the APF (Artificial Potential Fields) path planning algorithm instead of RRT within the Lula controller. Can the Lula controller support integrating APF as an alternative to RRT?

Thanks. I read the whole Motion Generation docs, and I’m still confused about the difference between Articulation, RMPflow, and Lula. If I want a robot that follows paths planned with other path planning algorithms, what should I do?

Should I use the Articulation controller, RMPflow path planning, or Lula RMPflow?

Lula has no direct support for Artificial Potential Fields (APF).

RMPflow is a reactive motion policy that has some concepts in common with potential fields. In RMPflow, each “RMP” is a policy that guides the robot toward an objective (move the end effector toward a target, avoid an obstacle, etc.). These RMPs are combined through an RMPflow tree into a set of accelerations defining the movement of each joint. The underlying theory is described here: [1811.07049] RMPflow: A Computational Graph for Automatic Motion Policy Generation
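
For reference, this is roughly how RMPflow is driven through the omni.isaac.motion_generation extension in the Motion Generation tutorials. Treat the exact signatures as assumptions to verify against your Isaac Sim version; robot is an initialized Articulation as in the earlier script, and the target position is a placeholder.

from omni.isaac.motion_generation import RmpFlow, ArticulationMotionPolicy
from omni.isaac.motion_generation import interface_config_loader
import numpy as np

# Load the shipped RMPflow config for the UR10 (robot description, URDF, RMPflow parameters)
rmp_config = interface_config_loader.load_supported_motion_policy_config("UR10", "RMPflow")
rmpflow = RmpFlow(**rmp_config)

# Wrap the Articulation so RMPflow outputs can be converted to ArticulationActions
physics_dt = 1.0 / 60.0
motion_policy = ArticulationMotionPolicy(robot, rmpflow, physics_dt)

rmpflow.set_end_effector_target(target_position=np.array([0.5, 0.0, 0.5]))

# Each physics step: refresh obstacle state, then compute and apply the next action
rmpflow.update_world()
action = motion_policy.get_next_articulation_action()
robot.apply_action(action)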

If you have a particular version of APF you are trying to implement, you have a lot of design choices.

If you already have a full implementation of APF, you may want to:

  1. set joint commands using the Articulation interface, and
  2. read obstacle positions from the world and use them as inputs to your world model (a rough sketch of this loop follows below).
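
Here is a rough sketch of that loop, assuming a hypothetical apf_step() function standing in for your APF implementation and a hypothetical obstacle prim at /World/Obstacle; only the Articulation and USD-query calls come from Isaac Sim.

from omni.isaac.core.articulations import Articulation
from omni.isaac.core.utils.types import ArticulationAction
from pxr import Usd, UsdGeom
import omni.usd
import numpy as np

robot = Articulation("/UR10")
robot.initialize()

def get_obstacle_position(prim_path):
    # Read an obstacle's world-space translation from the USD stage
    stage = omni.usd.get_context().get_stage()
    prim = stage.GetPrimAtPath(prim_path)
    transform = UsdGeom.Xformable(prim).ComputeLocalToWorldTransform(Usd.TimeCode.Default())
    return np.array(transform.ExtractTranslation())

def apf_step(q, obstacles, goal):
    # Placeholder for your APF update: combine an attractive gradient toward the goal
    # with repulsive gradients away from the obstacles
    step_size = 0.01
    return q + step_size * (goal - q)  # attractive term only, for illustration

goal_q = np.array([1.0, -0.5, 1.5, 0.0, 0.0, 0.0])   # example goal configuration
obstacle = get_obstacle_position("/World/Obstacle")  # hypothetical obstacle prim

# Each simulation step: read state, take one APF step, command the joints
q = robot.get_joint_positions()
robot.apply_action(ArticulationAction(joint_positions=apf_step(q, [obstacle], goal_q)))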

If you are trying to build your own implementation of APF “from scratch”, you may want to use parts of the Lula library as building blocks. For example, Lula Kinematics provides computations for forward kinematics (including Jacobians) that may be helpful.
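
As an illustration, here is a hedged sketch of querying forward kinematics through the LulaKinematicsSolver. The config loader call follows the Motion Generation docs, but the frame name “ee_link” is an assumption based on the UR10 URDF; print the valid frame names to confirm.

from omni.isaac.motion_generation import LulaKinematicsSolver, interface_config_loader
import numpy as np

# Load the shipped kinematics config (robot description + URDF) for the UR10
kinematics_config = interface_config_loader.load_supported_lula_kinematics_solver_config("UR10")
solver = LulaKinematicsSolver(**kinematics_config)

print("Valid frame names:", solver.get_all_frame_names())

# Forward kinematics: end effector pose for a given joint configuration
joint_positions = np.array([1.0, 0.0, 2.0, 3.0, 0.0, 0.0])
position, rotation = solver.compute_forward_kinematics("ee_link", joint_positions)
print("ee position:", position)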

In either case, the existing controllers in Isaac Sim can serve as examples of how to query the USD scene for obstacle positions and how to set joint targets for the robot. There is no out-of-the-box integration for APF, so your best option is to read the existing examples to understand how the Lula algorithms (RRT, RMPflow, etc.) were integrated and use them as a reference when designing your own integration.
