I have a mobile robot that I have been experimenting with and developing using various methods. I purchased a Jetson Orin NX to also explore the Isaac ROS platform. Currently, I have a RealSense D455 camera at hand. In addition, my robot has wheel odometry and lidar support.
I have completed the basic setups for Isaac ROS and conducted some tests (Nvblox, VSLAM). I believe I have obtained satisfactory results for the initial step.
Now, I want to move on to the navigation step, but I’m starting to get confused at this stage. In traditional methods, mapping is done with lidar and odometry, followed by navigation based on this map. Here, however, Nvblox comes into play first. As far as I understand, the purpose of Nvblox is to generate a 3D obstacle map of the robot’s immediate surroundings. From my experience, the map starts to disappear after a while, so the aim seems to be to use it more like an instantaneous local costmap rather than to create a permanent map.
Regarding VSLAM, instantaneous localization works without a pre-registered environment, but I haven’t been able to create any map. Do we obtain a map in the familiar 2D ROS format, or a permanent map in the Nvblox format?
Given this scenario, what approach would you recommend for mapping and navigation? I couldn’t find any explanatory information on these topics in the documents I’ve reviewed so far. If there is any, I would appreciate it if you could share it.
How effective are a single camera plus the aforementioned lidar and wheel odometry? Can reliable operation be achieved with these, or are additional cameras or sensors required?
Regarding VSLAM, instantaneous localization works without a pre-registered environment, but I haven’t been able to create any map. Do we obtain a map in the familiar 2D ROS format, or a permanent map in the Nvblox format?
Isaac ROS VSLAM localizes the robot in the environment instantaneously using the cuVSLAM algorithm, and it can do this with or without a pre-registered map.
A 2D map in the standard ROS format is not available from Isaac ROS Nvblox alone, but you can use the topic:
~/static_occupancy - sensor_msgs/PointCloud2 - A pointcloud of the static occupancy map (only voxels with occupancy probability > 0.5). Set static_occupancy_publication_rate_hz to control its publication rate.
and convert that output into a nav_msgs::msg::OccupancyGrid message.
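The core of that conversion is a projection of the occupied voxel centers onto a 2D grid. Below is a minimal, standalone sketch of that step; the function name, the z-band parameters, and the choice to mark projected cells as 100 are my own assumptions, not the Isaac ROS API. In a real node you would fill the point array from the incoming PointCloud2 and copy the grid into a nav_msgs/OccupancyGrid message.

```python
import numpy as np

def points_to_occupancy_grid(points, resolution=0.05, z_min=0.1, z_max=1.5):
    """Project occupied voxel centers (N x 3 array) onto a 2D occupancy grid.

    Points outside [z_min, z_max] are ignored so floor and ceiling voxels
    do not show up as obstacles. Returns (grid, origin), where grid is
    int8 (0 = free, 100 = occupied) and origin is the world (x, y) of
    cell (0, 0), following nav_msgs/OccupancyGrid conventions.
    """
    pts = points[(points[:, 2] >= z_min) & (points[:, 2] <= z_max)]
    if pts.size == 0:
        return np.zeros((1, 1), dtype=np.int8), (0.0, 0.0)
    origin = pts[:, :2].min(axis=0)
    cells = np.floor((pts[:, :2] - origin) / resolution).astype(int)
    width, height = cells.max(axis=0) + 1
    grid = np.zeros((height, width), dtype=np.int8)  # row-major: grid[y, x]
    grid[cells[:, 1], cells[:, 0]] = 100
    return grid, (float(origin[0]), float(origin[1]))
```

With a 0.05 m resolution, two occupied voxels 0.1 m apart in x land two cells apart, and any voxel above z_max (e.g. a ceiling point) is dropped before projection.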
Given this scenario, what approach would you recommend for mapping and navigation? I couldn’t find any explanatory information on these topics in the documents I’ve reviewed so far. If there is any, I would appreciate it if you could share it.
The best approach is to run Isaac ROS VSLAM and Isaac ROS Nvblox together to localize and map the environment, building a pipeline this way: the stereo camera feeds VSLAM, VSLAM provides the pose, and Nvblox uses that pose together with depth data to reconstruct the scene and produce the costmap for navigation.
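A minimal launch sketch of such a pipeline might look like the fragment below. The package, executable, topic, and frame names here are illustrative assumptions and should be verified against your Isaac ROS release and camera driver.

```python
# Illustrative only: camera -> VSLAM (pose) -> Nvblox (reconstruction/costmap).
# Verify package/executable names and remappings for your Isaac ROS version.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    # cuVSLAM: consumes stereo images (+ optional IMU), publishes odometry/pose
    visual_slam = Node(
        package='isaac_ros_visual_slam',
        executable='isaac_ros_visual_slam',
        remappings=[('stereo_camera/left/image', '/camera/infra1/image_rect_raw'),
                    ('stereo_camera/right/image', '/camera/infra2/image_rect_raw')])
    # Nvblox: consumes depth + the VSLAM pose, builds the 3D map / costmap
    nvblox = Node(
        package='nvblox_ros',
        executable='nvblox_node',
        parameters=[{'global_frame': 'odom'}])
    return LaunchDescription([visual_slam, nvblox])
```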
How effective are a single camera plus the aforementioned lidar and wheel odometry? Can reliable operation be achieved with these, or are additional cameras or sensors required?
Isaac ROS VSLAM uses only the input coming from a stereo camera and, optionally, an IMU; it doesn’t work with a single camera, lidar, or wheel odometry alone. However, you can fuse those data sources with a Kalman filter and use the resulting pose as an input for Isaac ROS Nvblox to reconstruct the 3D scene.
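One common way to implement that Kalman-filter fusion in ROS 2 is robot_localization’s ekf_node. The fragment below is a sketch of its parameters fusing wheel odometry and an IMU; the topic names /wheel/odometry and /imu/data are placeholders for your robot’s actual topics.

```yaml
ekf_filter_node:
  ros__parameters:
    frequency: 30.0
    two_d_mode: true          # planar robot: ignore z, roll, pitch
    odom_frame: odom
    base_link_frame: base_link
    world_frame: odom
    odom0: /wheel/odometry    # placeholder topic name
    # per-sensor config order: x y z, roll pitch yaw, vx vy vz,
    # vroll vpitch vyaw, ax ay az
    odom0_config: [false, false, false,
                   false, false, false,
                   true,  true,  false,
                   false, false, true,
                   false, false, false]
    imu0: /imu/data           # placeholder topic name
    imu0_config: [false, false, false,
                  false, false, true,
                  false, false, false,
                  false, false, true,
                  true,  false, false]
```

This fuses the wheel velocities (vx, vy, vyaw) with the IMU’s yaw, yaw rate, and x acceleration, publishing a filtered odometry that can serve as the pose input for Nvblox.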
First of all, thank you for your detailed response.
You mentioned that the VSLAM map can optionally be saved. So, what should I base this recording on? Is it sufficient to drive my vehicle through the environment once? Since there isn’t any visual indication on the screen that it’s mapping the environment, I’m unsure whether any process is occurring, and whether I should wait a bit or move the vehicle around. What method should I use for this?
Also, there’s a value related to accuracy. Are the translation and rotation errors mentioned there instantaneous error values, and if so, is there any data on the accumulated error when a vehicle returns to its starting position after navigating?
When mapping with VSLAM, some points belonging to a location disappear after passing through it. Do I need to wait a bit in front of obstacles while mapping, or is it sufficient to pass quickly? What method should I use for this?
What is the difference in accuracy between using and not using an IMU, and is there a measurement for this? Or is this difference significant?
Thank you @aozkilic for the excellent questions regarding navigating with vslam/nvblox. I too am struggling to grasp how a saved vslam map relates with or works with an nvblox saved map. In the end, I’d like to build an autonomous AMR and give users of the AMR a map to set navigation points. I have what I believe is a successful vslam map of my space that I included below, but like you, am not sure what metric to use to confirm the space is completely “slammed”.
I am also able to use the action LoadMapAndLocalize to successfully get localized on the map above. Unfortunately, I am not able to run vslam and nvblox at the same time, as my Jetson Orin Nano simply runs out of memory (also have a zedx camera). Were I to be running something beefier though, I presume my next step would be to obtain an nvblox map of the same space previously vslam’d. @Raffaello does the above sound correct to you? And if so, which of the nvblox map saving actions would be the right one for generating a 2d cost map? I don’t mean to hijack your thread @aozkilic, but sounds like we’re struggling with the same things.
Thank you for your comment; the feedback from someone else working on this matter is very valuable to me. I also believe that your questions here would be helpful for further clarification and understanding of the topic.
To clarify, I would like to specify what Isaac ROS vslam does.
Isaac ROS vslam does not generate an obstacle map. It creates an internal landmark map that is used for localization. You can store this map locally, and it can be quickly used when the node starts up.
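Saving and loading that landmark map is exposed through ROS 2 actions; the LoadMapAndLocalize action mentioned earlier in this thread is one of them. The commands below are a sketch of how they can be called from the CLI; the action names, interface types, and field names are assumptions to verify for your release with `ros2 action list` and `ros2 interface show`.

```shell
# Save the internal landmark map after driving the robot around:
ros2 action send_goal /visual_slam/save_map \
    isaac_ros_visual_slam_interfaces/action/SaveMap \
    "{map_url: /tmp/vslam_map}"

# Later, load the map and relocalize near an approximate position:
ros2 action send_goal /visual_slam/load_map_and_localize \
    isaac_ros_visual_slam_interfaces/action/LoadMapAndLocalize \
    "{map_url: /tmp/vslam_map, localize_near_point: {x: 0.0, y: 0.0, z: 0.0}}"
```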
Although the IMU improves accuracy, it mainly helps for brief periods when the video input is of poor quality and tracking might otherwise fail.
If you need to build an obstacle map, Isaac ROS nvblox is the appropriate package.
Thanks @Raffaello . Could you provide some detail on the map saving and loading features of NVBlox? I cannot find documentation on this and not sure what exactly these saved maps are used for and why or when you would load one.
Thank you, everyone, for your insightful questions and clarifying responses. I’m intrigued by the concept of integrating Isaac nvBLOX into a drone. Could you shed light on the feasibility of incorporating sensors or obtaining information about objects in the drone’s surroundings, including above, below, and beside it? I’m presently engaged in a project focusing on autonomous drones within warehouse environments. My attempt to implement ROS2 has proven challenging, as stability has been elusive. Any guidance or suggestions on navigating these challenges would be greatly appreciated.