Hi, I am looking for guidance with some problems in a personal robotics project. I have tried ROS 2 in the past, but I would still consider myself new to it. What I am trying to accomplish is to use Isaac ROS VSLAM and NVBLOX to build a map of the environment, save it, then load it and localize in it, and finally use NAV2 for navigation. I have already tried everything from the Isaac ROS documentation and it seems to work with the RealSense D435i camera alone, but I am failing to figure out how to put everything together to make it work with my robot. For example:
Should I only be saving the map from NVBLOX, or also from VSLAM, to successfully load it and localize later on?
How do I use the NVBLOX NAV2 plugin?
How could I switch between using odometry from the wheel encoders and from VSLAM to generate the transforms for the robot?
When I was using VSLAM alone I could echo its topics, but when using NVBLOX the VSLAM topics show up yet I am not able to echo them. So how do I know whether NVBLOX is working as it should?
Does NVBLOX take additional 2D LIDAR data, and what would its role be?
My dev machine and robot setup is as follows:
Dev machine - Desktop PC running Ubuntu 22.04, ROS2 Humble;
Robot - Laptop running Ubuntu 22.04, ROS2 Humble with a GTX 1650 GPU; differential-drive robot with 2 DC motors with encoders and 2 caster wheels; sensors are an Intel RealSense D435i and an RPLidar A1;
Welcome to the Isaac ROS forum, and thank you for your detailed message. I will reply to all of your questions; feel free to follow up if anything is incomplete.
The roles of the two maps are entirely different.
The NVBlox map is designed for external use; an example is NAV2 with the costmap plugin.
For your use case, I suggest saving both maps. This will enable the robot to promptly localize in the environment, as both algorithms will start from a filled database.
Remember to load both maps before the algorithms start.
I’m looking internally for more detailed documentation about this package; I will provide it as soon as possible.
We don’t provide a way to switch between two different odometry sources. In your case, I suggest writing your own ROS 2 node that reads both inputs and generates the right output for your robot.
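For example, a minimal rclpy sketch of such a switching node could look like the one below. The topic names wheel_odom, vslam_odom, and odom_selected are placeholders to remap to your actual odometry topics, and the source parameter selects which input is republished.

```python
import rclpy
from rclpy.node import Node
from nav_msgs.msg import Odometry


class OdomSwitcher(Node):
    """Republishes one of two odometry inputs, selected by the 'source' parameter."""

    def __init__(self):
        super().__init__('odom_switcher')
        # 'wheel' or 'vslam'; can be changed at runtime with `ros2 param set`
        self.declare_parameter('source', 'vslam')
        self.pub = self.create_publisher(Odometry, 'odom_selected', 10)
        self.create_subscription(Odometry, 'wheel_odom', self.wheel_cb, 10)
        self.create_subscription(Odometry, 'vslam_odom', self.vslam_cb, 10)

    def wheel_cb(self, msg: Odometry):
        if self.get_parameter('source').value == 'wheel':
            self.pub.publish(msg)

    def vslam_cb(self, msg: Odometry):
        if self.get_parameter('source').value == 'vslam':
            self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(OdomSwitcher())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

Note that whichever component consumes odom_selected (or the node itself, if you extend it) still has to be responsible for broadcasting the odom -> base_link transform.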
A practical approach that you can use is described in our documentation:
A practical approach to tracking odometry is to use multiple sensors with diverse methods so that systemic issues with one method can be compensated for by another method. With three separate estimates of odometry, failures in a single method can be detected, allowing for fusion of the multiple methods into a single higher quality result. VSLAM provides a vision- and IMU-based solution to estimating odometry that is different from the common practice of using LIDAR and wheel odometry.
The output is always available on topics. How did you set up your environment? Are you using our demos?
No, at this time nvblox works only with depth cameras, but we are working on it for a future implementation.
There is also Isaac ROS Map Localization, which works with 2D lidars:
Thank you for the clarification. I have some follow-up questions.
I know that this question is not really about Isaac ROS but rather plain ROS 2, but what I am struggling to understand is how to use, for example, the odometry from VSLAM and make my robot model move according to it. It works just fine with wheel odometry, but I am trying to figure out how to configure ros2_control, the robot state publisher, or something else to accomplish this.
I followed all the necessary steps in the documentation to set up nvblox, but I encountered one inconvenience when using the dev container. Every time I launch the container I need to install the nvblox dependencies for it to work, with the command
sudo apt update &&
sudo apt-get install -y ros-humble-isaac-ros-nvblox &&
rosdep update &&
rosdep install isaac_ros_nvblox
Otherwise it doesn’t work. Maybe I missed some step while setting everything up, but when I launch nvblox the map does seem to be generated, and I don’t think that would work without vslam publishing its topics. I’ll try to redo the setup, but maybe you have suggestions on what further information I should provide about this?
I managed to get nvblox working properly by following the build-from-source sections of isaac_ros_nvblox and isaac_ros_visual_slam, but there is still the inconvenience of having to install the vslam dependencies every time I start the container, with the following command:
rosdep install --from-paths ${ISAAC_ROS_WS}/src/isaac_ros_visual_slam --ignore-src -y
I managed to figure out how to make this work. I just needed to set the diff_drive_controller/DiffDriveController parameter enable_odom_tf to false in the ros2_control configuration .yaml file. This way I can switch between using wheel or vslam odometry.
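For reference, this is roughly what that looks like in the ros2_control controller .yaml (the controller name and file layout here are just an example from my setup; enable_odom_tf is the relevant setting):

```yaml
diff_drive_controller:
  ros__parameters:
    # Stop the controller from broadcasting odom -> base_link,
    # so the VSLAM (or fused) odometry can own that transform instead.
    enable_odom_tf: false
```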
Aside from that, I encountered another problem. Any suggestions why the vslam node fails to launch if I launch nvblox with vslam IMU fusion enabled?
Hi, I have another question. How do I use the pose obtained from the /visual_slam/load_map_and_localize action to set the pose of the robot, or should vslam set it automatically? I’ve tried using the /visual_slam/set_slam_pose service, but nothing seems to happen; the robot pose in RViz is still wrong with respect to the loaded map.
Also, you mentioned: “Remember to load both maps before the algorithms start.”
I am loading them after the algorithms start, because the services and actions do not exist before that, so I’m not sure what you meant here. Other than the robot pose problem, the map loads successfully.
I ended up using just Isaac VSLAM odometry for my project; everything else is slam_toolbox for mapping, AMCL for localization, and Nav2 for navigation, similar to the image below:
I also use the EKF filter from the robot_localization package to fuse visual, inertial, and wheel odometry. In the Nav2 voxel and obstacle layers I also use point cloud data.
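A hypothetical launch sketch of that EKF setup is below. The topic names (/diff_drive_controller/odom, /visual_slam/tracking/odometry, /camera/imu) and the choice of which fields to fuse are assumptions to adapt to your own drivers and tuning.

```python
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        Node(
            package='robot_localization',
            executable='ekf_node',
            name='ekf_filter_node',
            output='screen',
            parameters=[{
                'frequency': 30.0,
                'two_d_mode': True,           # planar differential-drive robot
                'publish_tf': True,           # the EKF owns odom -> base_link
                'odom_frame': 'odom',
                'base_link_frame': 'base_link',
                'world_frame': 'odom',
                # Wheel odometry: fuse planar velocities only (assumed topic).
                'odom0': '/diff_drive_controller/odom',
                'odom0_config': [False, False, False, False, False, False,
                                 True,  True,  False, False, False, True,
                                 False, False, False],
                # VSLAM odometry: fuse x, y and yaw pose (assumed topic).
                'odom1': '/visual_slam/tracking/odometry',
                'odom1_config': [True,  True,  False, False, False, True,
                                 False, False, False, False, False, False,
                                 False, False, False],
                # D435i IMU: fuse yaw rate and linear acceleration (assumed topic).
                'imu0': '/camera/imu',
                'imu0_config': [False, False, False, False, False, False,
                                False, False, False, False, False, True,
                                True,  False, False],
            }],
        ),
    ])
```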