How can I develop a complete visual navigation system using the Isaac ROS ecosystem on my own Unitree G1 robot, in my own environment?

I want to develop a complete visual navigation system for my robot using Isaac ROS.

My camera is a RealSense D435i and my robot is a Unitree G1. I can currently generate local voxel maps with nvblox and have completed the RealSense Visual SLAM tutorials, but I don’t know how to integrate them with Nav2 for navigation.

In a typical development workflow, I would first drive the robot around once to build a global map, display that map in RViz, and then run navigation tests with Nav2. How do I do this with Isaac ROS?

I noticed three related packages: nvblox, Visual SLAM, and Mapping and Localization, but I don’t know how they fit together.

My questions are:

  1. I want a robot that can navigate to fixed goal positions while avoiding obstacles in real time. Which packages do I need? Is nvblox + VSLAM alone sufficient?

  2. The nvblox, Visual SLAM, and Mapping and Localization packages can all save maps. What is each of these maps used for?

  3. Does nvblox + VSLAM require a pre-built global map for navigation? If not, what is the map saved by VSLAM (*.mdb) used for? If so, should the global map be generated with the Mapping and Localization package?

  4. I’ve successfully completed the official tutorials for nvblox (quickstart, sim, RealSense) and VSLAM (quickstart, RealSense). My next step is to apply them to my own scenario on a G1 robot. Should I start in Isaac Sim, test with RViz visualization, or can I go directly to the real robot?

Hello @fhqjwyk,

Here are replies to your questions.

  1. VSLAM gives you the robot’s pose and nvblox gives you a map, but you still need Nav2 to decide where to go (path planning) and how to get there (controller).
    You can check the flow graph provided on this page: Isaac ROS Nvblox — isaac_ros_docs documentation. A minimal example of commanding Nav2 on top of this stack is sketched below.
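
    For illustration, here is a minimal sketch of sending a Nav2 goal from Python while VSLAM supplies the pose and nvblox supplies the costmap. It uses the nav2_simple_commander package; the goal coordinates and the waitUntilNav2Active workaround (there is no AMCL localizer to wait for when VSLAM localizes) are assumptions to adapt to your setup.

```python
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator, TaskResult


def main():
    rclpy.init()
    navigator = BasicNavigator()

    # VSLAM (not AMCL) provides localization here, so we only wait for the
    # navigator itself. Passing 'bt_navigator' as the localizer skips the
    # AMCL initial-pose wait (assumption: adjust for your Nav2 version).
    navigator.waitUntilNav2Active(localizer='bt_navigator')

    # Example goal in the VSLAM 'map' frame (coordinates are placeholders).
    goal = PoseStamped()
    goal.header.frame_id = 'map'
    goal.header.stamp = navigator.get_clock().now().to_msg()
    goal.pose.position.x = 2.0
    goal.pose.position.y = 0.5
    goal.pose.orientation.w = 1.0

    navigator.goToPose(goal)
    while not navigator.isTaskComplete():
        feedback = navigator.getFeedback()
        if feedback:
            print(f'Distance remaining: {feedback.distance_remaining:.2f} m')

    if navigator.getResult() == TaskResult.SUCCEEDED:
        print('Goal reached.')
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```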

  2. The Visual SLAM map is used to estimate the robot’s pose relative to the map origin.
    The nvblox map enables local obstacle avoidance at runtime, or offline global map generation, and it can be used to generate 2D costmaps for Nav2.
    Mapping and Localization is usually for global map creation; the final global map used by Nav2 is typically derived from an offline process involving both VSLAM and nvblox. A hedged example of saving the nvblox map offline is shown after this item.
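
    As an example of the offline map-saving side, the snippet below calls nvblox’s map-export service from rclpy. The service name (/nvblox_node/save_ply) and the nvblox_msgs/srv/FilePath type are taken from recent isaac_ros_nvblox releases but should be treated as assumptions; verify them with `ros2 service list -t` on your system. The VSLAM *.mdb map is saved through the visual_slam save-map interface in a similar way.

```python
import rclpy
from rclpy.node import Node
from nvblox_msgs.srv import FilePath  # assumption: srv with a file_path request field


class NvbloxMapSaver(Node):
    """Small client that asks a running nvblox node to export its map."""

    def __init__(self):
        super().__init__('nvblox_map_saver')
        # Assumption: service name as exposed by nvblox_ros in recent releases.
        self.client = self.create_client(FilePath, '/nvblox_node/save_ply')

    def save(self, path: str):
        self.client.wait_for_service()
        request = FilePath.Request()
        request.file_path = path
        future = self.client.call_async(request)
        rclpy.spin_until_future_complete(self, future)
        return future.result()


def main():
    rclpy.init()
    saver = NvbloxMapSaver()
    result = saver.save('/tmp/office_mesh.ply')
    saver.get_logger().info(f'save_ply returned: {result}')
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```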

  3. The Isaac ROS navigation pipeline works as a real-time local sensing loop:

    1. VSLAM continuously provides an estimated pose (odometry).

    2. nvblox uses this pose to reconstruct the immediate environment (local map).

    3. nvblox publishes a 2D distance map slice (a costmap) that feeds directly into the Nav2 local costmap and controller.

    This allows the robot to navigate and avoid obstacles without a pre-built global map. A sketch of this loop as a single composed launch file follows.
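
    Here is that loop as a ROS 2 launch file composing both nodes in one container. The plugin names, the enable_imu_fusion parameter, and the RealSense topic remappings are assumptions drawn from the Isaac ROS example launch files; verify them against the launch files shipped with your release.

```python
from launch import LaunchDescription
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    container = ComposableNodeContainer(
        name='nav_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container_mt',
        composable_node_descriptions=[
            # Step 1: VSLAM estimates the pose and publishes odometry + TF.
            ComposableNode(
                package='isaac_ros_visual_slam',
                plugin='nvidia::isaac_ros::visual_slam::VisualSlamNode',
                name='visual_slam_node',
                parameters=[{'enable_imu_fusion': True}],  # assumption: D435i IMU
            ),
            # Steps 2-3: nvblox consumes depth plus the VSLAM TF tree,
            # reconstructs the local 3D map, and publishes the 2D distance
            # map slice that the Nav2 costmap consumes.
            ComposableNode(
                package='nvblox_ros',
                plugin='nvblox::NvbloxNode',
                name='nvblox_node',
                remappings=[
                    ('depth/image', '/camera/depth/image_rect_raw'),
                    ('depth/camera_info', '/camera/depth/camera_info'),
                ],
            ),
        ],
        output='screen',
    )
    return LaunchDescription([container])
```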

  4. If you have a Unitree G1 and a RealSense D435i on a Jetson Orin, you can integrate the ROS nodes directly on the Orin/G1. Since this implementation requires integrating multiple ROS nodes, it’s strongly recommended to verify each node’s behavior individually first to make debugging easier.
    You might need to create a launch file that brings up all the required nodes, listed below; a hedged launch sketch follows the list.

    • realsense2_camera node (publishes RGB, depth, and IMU data).

    • isaac_ros_visual_slam node (provides the pose).

    • isaac_ros_nvblox node (provides the local map).

    • Nav2 stack (consumes the pose and local costmap for planning and control).

    • Unitree G1 ROS driver (sends commands to and receives state from the robot).
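
    A top-level launch sketch tying these together is below. The included launch file names and the G1 driver package are placeholders/assumptions; substitute the launch files that ship with your Isaac ROS release and your actual G1 driver package.

```python
import os

from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch_ros.actions import Node


def generate_launch_description():
    def include(pkg, launch_file):
        # Helper: include another package's Python launch file.
        return IncludeLaunchDescription(
            PythonLaunchDescriptionSource(
                os.path.join(get_package_share_directory(pkg),
                             'launch', launch_file)))

    return LaunchDescription([
        # RealSense D435i: RGB, depth, and IMU.
        include('realsense2_camera', 'rs_launch.py'),
        # VSLAM (launch file name is an assumption; check your release).
        include('isaac_ros_visual_slam', 'isaac_ros_visual_slam_realsense.launch.py'),
        # nvblox local mapping (launch file name is an assumption).
        include('nvblox_examples_bringup', 'realsense_example.launch.py'),
        # Nav2 planning and control.
        include('nav2_bringup', 'navigation_launch.py'),
        # Unitree G1 driver: placeholder package/executable names.
        Node(package='unitree_g1_driver', executable='g1_driver_node'),
    ])
```

    Before launching everything together, verify each piece in isolation (for example with `ros2 topic hz` on the camera and odometry topics) so failures are easy to attribute to a single node.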
