Hello, I have some issues and some questions I would like to ask.
First of all, I have a problem with Nvblox and map clearing. In my config file I set the parameters decay_tsdf_rate_hz: 0.0 and map_clearing_radius_m: 0.0 (commented "no map clearing if < 0.0"), but when I visualize the mesh in Foxglove, the older parts of the map (no longer visible to the camera) are deleted. How can I fix this issue?
I was also wondering whether I have reached the limits of the NVIDIA Jetson Orin Nano Super. Right now I am running a stack composed of ros2_control, sllidar_ros2, VSLAM, Nvblox, Nav2, and the isaac_ros_yolov8 node with the YOLOv8 nano model. (With this, all four CPU cores sit at about 90%, and RAM usage is roughly 4-5 GB out of 8.)
Second of all, is Nvblox with people_segmentation mode supported on this board? If I try to launch it I get a "ran out of memory" error and the node doesn't start (even with the YOLOv8 node disabled).
Would it be possible to run the isaac_ros_obj_segmentation node alongside my pipeline and YOLOv8, or would I run out of memory? I would like different inflation layers in Nav2 for detected people, e.g. a 1-2 m inflation radius instead of the 0.5 m used for objects.
For this purpose, would it be better to use the YOLOv8 segmentation model directly with the isaac_ros_yolov8 node and lose the object-recognition capability?
Thank you very much for your help, Edoardo.
Hello @rdedo99,
To disable radius-based map clearing, you need to set the config like this:

```yaml
map_clearing_radius_m: -1.0  # or any negative value; no map clearing if < 0.0
clear_map_outside_radius_rate_hz: 0.0
decay_tsdf_rate_hz: 0.0
decay_dynamic_occupancy_rate_hz: 0.0
```
VSLAM + nvblox + DNN inference are indeed all GPU-heavy. To run this kind of full Perceptor-style stack on the Orin Nano, you'll need to compromise with lower resolutions, lower frame rates, or smaller/fewer models.
The people_segmentation mode adds a heavy segmentation model, which is why you’re seeing “ran out of memory.” We recommend against running both YOLOv8 and isaac_ros_obj_segmentation on Nano. Instead, consider trying a single segmentation model (e.g., YOLOv8 segmentation) and connecting its person masks to a custom Nav2 costmap layer with a larger inflation radius for people.
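To make the per-class inflation idea concrete, here is a minimal sketch of choosing a larger inflation radius for people than for other obstacles. The class name, the radii, and the function are illustrative assumptions, not part of any Isaac ROS or Nav2 API:

```python
# Illustrative sketch only: pick a larger costmap inflation radius for
# detected people than for generic obstacles. The class names and radii
# here are assumptions, not values from any Isaac ROS or Nav2 API.

PERSON_INFLATION_M = 1.5   # bigger safety margin around people
OBJECT_INFLATION_M = 0.5   # default margin for other detected objects

def inflation_radius_for(class_name: str) -> float:
    """Return the inflation radius (meters) to use for one detection."""
    return PERSON_INFLATION_M if class_name == "person" else OBJECT_INFLATION_M

for cls in ("person", "chair"):
    print(cls, inflation_radius_for(cls))
```

A custom costmap layer would then mark each detection's footprint using the radius returned here.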
Thank you very much for your answer.
I have modified the parameters you suggested, but the situation hasn't changed: I still see the old voxels being removed from the Foxglove visualization.
To use YOLOv8 segmentation, is it possible to use the isaac_ros_yolov8 package with the engine file path set to the YOLOv8-seg model, or do I have to use a different package, like the U-Net one with PeopleSemSegNet AMR? Or do I have to integrate the YOLOv8 model inside the U-Net package?
Thank you, Edoardo.
Could you first check what values nvblox is actually using by running these commands inside your running system?

```shell
ros2 param get /nvblox_node map_clearing_radius_m
ros2 param get /nvblox_node clear_map_outside_radius_rate_hz
ros2 param get /nvblox_node decay_tsdf_rate_hz
ros2 param get /nvblox_node decay_dynamic_occupancy_rate_hz
```
If any of those come back as positive / non‑zero, it means another param file or launch fragment is overriding your settings, and you’ll need to update that config too.
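One way to audit all the effective values at once is ros2 param dump /nvblox_node, which prints the node's parameters as YAML. As a sketch, a few lines of Python can flag any clearing/decay rate that is still positive; the embedded dump text below is an assumed example of that YAML layout, not real output:

```python
# Sketch: flag clearing/decay parameters that are still positive in the
# output of `ros2 param dump /nvblox_node`. SAMPLE_DUMP is an assumed
# example of the dump's YAML layout, not captured output.

SAMPLE_DUMP = """\
/nvblox_node:
  ros__parameters:
    map_clearing_radius_m: -1.0
    clear_map_outside_radius_rate_hz: 0.0
    decay_tsdf_rate_hz: 0.0
    decay_dynamic_occupancy_rate_hz: 0.0
"""

WATCHED = {
    "map_clearing_radius_m",
    "clear_map_outside_radius_rate_hz",
    "decay_tsdf_rate_hz",
    "decay_dynamic_occupancy_rate_hz",
}

def positive_params(dump_text):
    """Return the watched parameters whose value is > 0 (still clearing)."""
    offenders = []
    for line in dump_text.splitlines():
        key, _, value = line.strip().partition(":")
        if key in WATCHED:
            try:
                if float(value) > 0.0:
                    offenders.append(key)
            except ValueError:
                pass
    return offenders

print(positive_params(SAMPLE_DUMP))  # prints [] for this sample
```

If the list is non-empty on your machine, that parameter is being overridden somewhere in your launch chain.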
Also, isaac_ros_yolov8 only supports detection, so you can’t just point engine_file_path to a YOLOv8‑seg model and get segmentation masks from that GEM.
For people segmentation with the existing framework you have two realistic options:

- Use PeopleSemSegNet via the isaac_ros_image_segmentation / isaac_ros_unet pipeline (possibly at lower resolution / INT8) and feed its person mask into Nav2.
- If you really want YOLOv8-seg, export it to ONNX and build a TensorRT engine, then write a small node that runs the engine, decodes the masks into a sensor_msgs/Image topic, and wire that topic into Nav2.
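If you go the YOLOv8-seg route, the mask-decoding step follows the standard "prototype masks + per-detection coefficients" scheme used by YOLOv8 segmentation heads: each mask is the sigmoid of a weighted sum of prototype masks. Here is a rough, pure-Python sketch of that step; the tiny hand-made tensors stand in for the real TensorRT engine outputs, so shapes and values are assumptions:

```python
import math

# Sketch of YOLOv8-seg mask decoding ("prototype masks + per-detection
# coefficients"). In a real node the prototypes and coefficients come from
# the TensorRT engine outputs; here they are tiny illustrative tensors.

def decode_mask(coeffs, prototypes):
    """mask[y][x] = sigmoid(sum_k coeffs[k] * prototypes[k][y][x])"""
    h, w = len(prototypes[0]), len(prototypes[0][0])
    mask = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = sum(c * p[y][x] for c, p in zip(coeffs, prototypes))
            mask[y][x] = 1.0 / (1.0 + math.exp(-s))
    return mask

def to_mono8(mask, thresh=0.5):
    """Threshold a soft mask to 0/255 values, i.e. a mono8 image buffer."""
    return [[255 if v > thresh else 0 for v in row] for row in mask]

# One prototype mask (2x2) and one detection coefficient, for illustration:
protos = [[[10.0, -10.0], [10.0, -10.0]]]
print(to_mono8(decode_mask([1.0], protos)))
```

The resulting binary mask is what the small node would publish as a mono8 sensor_msgs/Image for a custom Nav2 costmap layer to consume.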
I checked the four parameters and each one was set exactly as you suggested, but in Foxglove I still can't see the full map without the older readings being removed. For YOLO, I instead created a custom node that publishes a circle on the floor.
I have another question: is it possible that the YOLOv8n plan is "heavier" than the YOLOv8s one? With the TensorRT plan and the "n" model I sometimes got a GFX Out of Memory error, but with the "s" model that never happened. Could this be caused by how TensorRT creates the .plan file, or is it something else? (The stack is always the same; the only difference is the YOLO model.)
As always thank you for your answers!