Hi @shentunghuang1 ,
I still have to try the Franka examples in this Isaac ROS version, but I noticed this in your log:
[ERROR] [ros2_control_node-5]: process has died [pid 3906, exit code -11, cmd '/opt/ros/jazzy/lib/controller_manager/ros2_control_node --ros-args --params-file /opt/ros/jazzy/share/kinova_gen3_7dof_robotiq_2f_85_moveit_config/config/ros2_controllers.yaml -r /controller_manager/robot_description:=/robot_description'].
It seems to be loading the controller configuration for the Kinova robot instead of the Franka one: maybe there's an error somewhere in the launch stack?
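If you want to trace where the Kinova config sneaks in, a quick grep over the workspace sources usually helps. This is just a diagnostic sketch; the path is an assumption, so point it at wherever your launch package actually lives:

```shell
# Search the workspace sources for references to the Kinova MoveIt config
# that showed up in the error log; if nothing turns up, the params file is
# probably passed in via a launch argument instead.
grep -rn "kinova_gen3_7dof_robotiq_2f_85" "${ISAAC_ROS_WS}/src" 2>/dev/null \
  || echo "no direct reference found -- check the launch arguments instead"
```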
For UR robots it gives:
[ERROR] [launch]: Caught exception in launch (see debug for traceback): "package 'ur_robot_driver' not found
Either you're missing the dependency or there's some mixup. I could successfully launch the example using both ur_robot_driver and ur_description, but for the latter you'd need to modify the ur_sim_macro.xacro file.
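If it's just the missing dependency, installing the binary packages should be enough. The Jazzy package names below are my assumption; verify them with apt-cache search ur-robot-driver first:

```shell
# Install the UR driver and description packages
# (names assumed for ROS 2 Jazzy -- verify with apt-cache search):
sudo apt-get update
sudo apt-get install -y ros-jazzy-ur-robot-driver ros-jazzy-ur-description
```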
I just reinstalled everything on Thor with JetPack 7.1 and, apart from some DNS issues from inside the isaac-ros container, it still worked fine.
I install it using some bootstrap scripts I made that set up all the required environment variables before building the image, so all dependencies end up installed in the container and all Isaac ROS packages are built from source.
You'll find the instructions HERE, but in short:
cd ${ISAAC_ROS_WS}/src
git clone https://github.com/pastoriomarco/isaac_ros_custom_bringup
- Set up the config files so the isaac-ros CLI builds all Isaac ROS packages from source:
bash ${ISAAC_ROS_WS}/src/isaac_ros_custom_bringup/isaac_ros_4/scripts/bootstrap_isaac_ros_cli_files.sh --source
- Build the isaac-ros container from the local dockerfile and configs:
isaac-ros activate --build-local
Once inside the container, you can apply the fixes with:
bash ${ISAAC_ROS_WS}/src/isaac_ros_custom_bringup/isaac_ros_4/scripts/apply-isaac-manipulation-fixes.sh
Then you can follow the official instructions to launch the packages/examples. If you have Isaac Sim, I suggest starting from Pick and Place — Isaac ROS, since that page also has the instructions to set up the config files used in the other examples.
Also, you'll have to set these environment variables each time you restart the container, even if the config files are already there:
# Point to the directory containing your configuration files
export ISAAC_MANIPULATOR_WORKFLOW_CONFIG_DIR="${ISAAC_ROS_WS}/isaac_manipulator_config"
export ISAAC_MANIPULATOR_PICK_AND_PLACE_CONFIG_DIR="${ISAAC_ROS_WS}/isaac_manipulator_config"
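If the config files survive a container restart but the variables don't, one option is to append the exports to the container user's ~/.bashrc. This only helps if your container's home directory persists across restarts; otherwise you'd bake them into the dockerfile instead:

```shell
# Persist the two config-dir variables in ~/.bashrc so every new shell
# inside the container picks them up (assumes the container home persists):
cat >> ~/.bashrc <<'EOF'
export ISAAC_MANIPULATOR_WORKFLOW_CONFIG_DIR="${ISAAC_ROS_WS}/isaac_manipulator_config"
export ISAAC_MANIPULATOR_PICK_AND_PLACE_CONFIG_DIR="${ISAAC_ROS_WS}/isaac_manipulator_config"
EOF
```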
To make that example work you'll have to start the Isaac Sim scene as instructed on the example page. Last time I checked there was a problem loading the gripper: if it's not fixed yet, you can find a workaround here.
PLEASE NOTE: when building all packages from source, keep in mind that isaac_ros_custom_bringup's bootstrap script includes FoundationStereo by default. If you have less than 12-16 GB of VRAM the build may fail: I just rebuilt everything, and the isaac_ros_foundationstereo_models_install package used almost 10 GB of VRAM during the build.
If it does fail, once inside the container you can install the FoundationStereo packages from apt-get and use colcon build --packages-up-to for the remaining packages, but you'll have to reinstall those apt packages each time you start the container unless you modify the dockerfiles.
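As a sketch of that fallback (the apt package name and build targets below are assumptions; match them against your build log and what apt-cache actually offers):

```shell
# Fallback sketch: install the FoundationStereo bits from apt instead of
# building them, then build only up to the packages you actually need.
# Package/target names are assumptions -- adjust to your setup.
sudo apt-get install -y ros-jazzy-isaac-ros-foundationstereo
colcon build --packages-up-to isaac_manipulator_bringup \
  --packages-skip isaac_ros_foundationstereo_models_install
```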
The whole process is customizable both through the scripts I made and through the isaac-ros-cli config files, so this could be automated too. I made it to have everything I need ready, and I didn't take much time to cover the various edge cases. Since you have a Thor too, you may want to check whether it works on that platform first.
It's not perfect and it can surely be made lighter/better, but I just needed something that lets me spend my time testing Isaac ROS packages instead of setting up the container. I'll refine it if/when needed!
PS: if you use all-default DDS settings (like me), please remember to disable WiFi on at least one of the machines involved in the interaction between Isaac Sim and Isaac ROS, otherwise DDS will broadcast on all networks and jam the execution. Sometimes it still works, but most of the time there's partial or no communication and nodes crash, at least on my setup.
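An alternative to disabling WiFi is to pin DDS to the wired interface. A minimal sketch, assuming the default RMW is Fast DDS and that 192.168.1.10 is the wired IP of the machine (substitute your own address, and adapt if you're on a different RMW):

```shell
# Write a Fast DDS profile that whitelists only the wired interface,
# then point the default-profiles env variable at it.
# 192.168.1.10 is a placeholder -- use your machine's wired IP.
cat > /tmp/fastdds_wired.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<profiles xmlns="http://www.eprosima.com/XMLSchemas/fastRTPS_Profiles">
  <transport_descriptors>
    <transport_descriptor>
      <transport_id>udp_wired_only</transport_id>
      <type>UDPv4</type>
      <interfaceWhiteList>
        <address>192.168.1.10</address>
      </interfaceWhiteList>
    </transport_descriptor>
  </transport_descriptors>
  <participant profile_name="wired_only" is_default_profile="true">
    <rtps>
      <userTransports>
        <transport_id>udp_wired_only</transport_id>
      </userTransports>
      <useBuiltinTransports>false</useBuiltinTransports>
    </rtps>
  </participant>
</profiles>
EOF
export FASTRTPS_DEFAULT_PROFILES_FILE=/tmp/fastdds_wired.xml
```

This keeps discovery traffic off the WiFi without having to turn the interface off entirely.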