Hi there,
I’m trying to take a side-by-side stereo video feed, run it through the ESS disparity and VSLAM nodes, and then feed the outputs into nvblox to get a 3D reconstruction in real time.
I’ve noticed that when the VSLAM node successfully performs a loop closure during SLAM, only the robot’s pose updates; the nvblox-generated mesh does not adjust or realign to reflect the loop closure.
Does Isaac ROS Nvblox support loop closure? If so, how can I achieve this?
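For context, the ESS / depth part of the pipeline is launched roughly like this (a simplified sketch; the plugin, topic, and engine-path names below are from the isaac_ros_ess and isaac_ros_stereo_image_proc packages as I remember them and may differ between releases):

# ESS disparity + disparity-to-depth stage (simplified; engine path is a placeholder)
ess_node = ComposableNode(
    name='ess_node',
    package='isaac_ros_ess',
    plugin='nvidia::isaac_ros::dnn_stereo_depth::ESSDisparityNode',
    parameters=[{'engine_file_path': '/path/to/ess.engine'}],
    remappings=[('left/image_rect', '/left/image_rect_resize'),
                ('left/camera_info', '/left/camera_info_rect_resize'),
                ('right/image_rect', '/right/image_rect_resize'),
                ('right/camera_info', '/right/camera_info_rect_resize')],
)
disparity_to_depth_node = ComposableNode(
    name='disparity_to_depth_node',
    package='isaac_ros_stereo_image_proc',
    plugin='nvidia::isaac_ros::stereo_image_proc::DisparityToDepthNode',
    remappings=[('depth', '/depth')],  # this /depth topic is what nvblox consumes below
)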
Below is how I am launching the VSLAM and nvblox nodes:
visual_slam_node = ComposableNode(
    name='visual_slam_node',
    package='isaac_ros_visual_slam',
    plugin='nvidia::isaac_ros::visual_slam::VisualSlamNode',
    remappings=[('stereo_camera/left/image', '/left/image_raw_grayscale'),
                ('stereo_camera/left/camera_info', '/left/camera_info'),
                ('stereo_camera/right/image', '/right/image_raw_grayscale'),
                ('stereo_camera/right/camera_info', '/right/camera_info')],
    parameters=[{
        'enable_rectified_pose': True,
        'denoise_input_images': False,
        'rectified_images': False,
        'img_jitter_threshold_ms': 110.00,  # default is 33.33 (30 fps); input to VSLAM is ~23 fps in gscam mode, ~10 fps on Orin
        'enable_slam_visualization': True,
        'enable_observations_view': True,
        'enable_landmarks_view': True,
        'enable_localization_n_mapping': True,
        'enable_imu_fusion': False,
        'map_frame': 'map',
        'odom_frame': 'odom',
        'base_frame': 'base_link',
        'input_left_camera_frame': 'stereocamera_left_frame',
        'input_imu_frame': 'imu',
    }],
)
# Nvblox node
nvblox_node = ComposableNode(
    name='nvblox_node',
    package='nvblox_ros',
    plugin='nvblox::NvbloxNode',
    parameters=['/workspaces_2/isaac_ros-dev/src/custom_package/config/nvblox_params.yaml'],
    remappings=[
        ('color/image', '/left/image_rect_resize'),
        ('color/camera_info', '/left/camera_info_rect_resize'),
        ('depth/camera_info', '/left/camera_info_rect_resize'),
        ('depth/image', '/depth'),
        # 'transform' and 'pose' topics are not required, since nvblox_params.yaml sets use_tf_transforms: true
    ],
)
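Both nodes are then loaded into a single multi-threaded component container, roughly like this (simplified; the container name is arbitrary):

container = ComposableNodeContainer(
    name='nvblox_vslam_container',
    namespace='',
    package='rclcpp_components',
    executable='component_container_mt',
    composable_node_descriptions=[visual_slam_node, nvblox_node],
    output='screen',
)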
and here is the nvblox_params.yaml:
/**:
  ros__parameters:
    # miscellaneous
    global_frame: "map"
    voxel_size: 0.02  # voxel size in m (2 cm)
    use_tf_transforms: true
    # multi mapper
    mapping_type: "static_tsdf"  # ["static_tsdf", "static_occupancy"]
    connected_mask_component_size_threshold: 2000
    # esdf settings
    compute_esdf: false
    publish_esdf_distance_slice: false
    # mesh settings
    compute_mesh: true
    mesh_update_rate_hz: 30.0
    # color settings
    use_color: true
    max_color_update_hz: 30.0
    # depth settings
    use_depth: true
    max_depth_update_hz: 30.0
    # lidar settings
    use_lidar: false
    # input queues
    max_poll_rate_hz: 30.0
    maximum_sensor_message_queue_length: 30
    # map clearing settings
    map_clearing_radius_m: -1.0  # no map clearing if < 0.0
    map_clearing_frame_id: "base_link"
    # QoS settings
    depth_qos: "SENSOR_DATA"
    color_qos: "SENSOR_DATA"
    static_mapper:
      # projective integrator (tsdf/color/occupancy)
      projective_integrator_max_integration_distance_m: 2.0
      projective_integrator_truncation_distance_vox: 50.0
      weighting_mode: "inverse_square"
      projective_integrator_max_weight: 1000.0
      # mesh integrator
      mesh_integrator_min_weight: 100.0
      # tsdf decay integrator
      tsdf_decay_factor: 0.95
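For what it’s worth, the loop-closure correction does show up on the pose side: the map -> odom transform published by the VSLAM node jumps when a loop closes, which I can watch with:

ros2 run tf2_ros tf2_echo map odom

It is only the mesh that nvblox has already integrated in the map frame that stays where it was.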