Nvblox not working in realtime

Hi,

I am trying to use isaac_ros_nvblox integrated with Nav2 for planning and obstacle avoidance. For sensors I am using two ZED 2i cameras providing RGB-D data.

I am facing a major issue where nvblox does not create/clear costs in real time; it lags significantly, almost 3-4 s behind. I am attaching my config and launch file for your reference.

Any nudge in the right direction would be really appreciated to help us solve this issue.

Launch file

import os
import launch
import launch_ros
from ament_index_python.packages import get_package_share_directory
from launch_ros.actions import ComposableNodeContainer, Node
from launch_ros.descriptions import ComposableNode

def generate_launch_description():
    camera_ns = ['camera_bottom_back', 'camera_bottom_front']
    robot_ns = os.getenv('ROBOT_NAMESPACE', '/robot1')
    robot_ns_id = os.getenv('ROBOT_NAMESPACE_ID', 'robot1')

    nvblox_base_config = os.path.join(get_package_share_directory('navigation'), 'config', 'nvblox.yaml')
    nvblox_node = ComposableNode(
        name='nvblox_node',
        package='nvblox_ros',
        plugin='nvblox::NvbloxNode',
        namespace=f"{robot_ns}/nuc",
        remappings=[
            ('camera_0/color/image', f'{robot_ns}/{camera_ns[0]}/left/image_rect_color'),
            ('camera_0/color/camera_info', f'{robot_ns}/{camera_ns[0]}/left/camera_info'),
            ('camera_0/depth/image', f'{robot_ns}/{camera_ns[0]}/depth/depth_registered'),
            ('camera_0/depth/camera_info', f'{robot_ns}/{camera_ns[0]}/left/camera_info'),
            ('camera_1/color/image', f'{robot_ns}/{camera_ns[1]}/left/image_rect_color'),
            ('camera_1/color/camera_info', f'{robot_ns}/{camera_ns[1]}/left/camera_info'),
            ('camera_1/depth/image', f'{robot_ns}/{camera_ns[1]}/depth/depth_registered'),
            ('camera_1/depth/camera_info', f'{robot_ns}/{camera_ns[1]}/left/camera_info'),],
        parameters=[nvblox_base_config,
                    {'num_cameras': 2,
                     'use_lidar': False,
                     'use_color': False,
                     'use_depth': True,
                     'use_tf_transforms': True,
                     'map_clearing_frame_id': f'{robot_ns_id}/base_link',
                     'pose_frame': f'{robot_ns_id}/base_link',
                     'esdf_slice_bounds_visualization_attachment_frame_id': f'{robot_ns_id}/base_link',
                     'workspace_height_bounds_visualization_attachment_frame_id': f'{robot_ns_id}/base_link',
                     'global_frame': 'map',
                     'input_qos': 'SENSOR_DATA',
                     }],
    )

    nvblox_costmap_filter_node = ComposableNode(
        name='nvblox_costmap_filter_node',
        package='navigation',
        plugin='NvbloxCostmapFilter',
        namespace=f"{robot_ns}/nuc",
        parameters=[{'namespace': robot_ns,
                     'footprint_topic': "visualization_footprint",
                     }],
    )

    nvblox_occupancy_grid_filter_node = ComposableNode(
        name='nvblox_occupancy_grid_filter_node',
        package='navigation',
        plugin='NvbloxOccupancyGridFilter',
        namespace=f"{robot_ns}/nuc",
        parameters=[{'namespace': robot_ns,
                     'footprint_topic': "visualization_footprint",
                     }],
    )

    nvblox_launch_container = ComposableNodeContainer(
        name='nvblox_launch_container',
        namespace=f"{robot_ns}/nuc",
        package='rclcpp_components',
        executable='component_container',
        composable_node_descriptions=[
            nvblox_node,
            nvblox_costmap_filter_node,
            nvblox_occupancy_grid_filter_node
        ],
        output='screen'
    )

    return launch.LaunchDescription(
        [
            nvblox_launch_container
        ]
    )

Config

/**:
  ros__parameters:
    # General settings
    cuda_stream_type: 1
    back_projection_subsampling: 1 # no subsampling if == 1
    clear_map_outside_radius_rate_hz: 0.1
    connected_mask_component_size_threshold: 2000
    decay_dynamic_occupancy_rate_hz: 10.0
    decay_tsdf_rate_hz: 5.0
    enable_mesh_markers: false
    esdf_and_gradients_unobserved_value: -1000.000
    esdf_mode: "2d" # ["2d", "3d"]
    integrate_color_rate_hz: 5.0
    integrate_depth_rate_hz: 30.0
    integrate_lidar_rate_hz: 15.0

    lidar_height: 1
    lidar_vertical_fov_rad: 0.524
    lidar_width: 1800

    map_clearing_radius_m: 0.01 # no map clearing if < 0.0
    max_back_projection_distance: 4.0
    mapping_type: "static_occupancy"  # ["static_tsdf", "static_occupancy", "dynamic"]
    publish_esdf_distance_slice: true
    publish_layer_rate_hz: 15.0
    publish_debug_vis_rate_hz: 2.0
    esdf_slice_bounds_visualization_side_length: 4.0

    tick_period_ms: 10 # Processing happens every n-th tick_period. <= 0 means that no processing takes place
    update_esdf_rate_hz: 15.0
    update_mesh_rate_hz: 0.0

    lidar_min_valid_range_m: 0.1
    lidar_max_valid_range_m: 20.0

    voxel_size: 0.05

    # printing statistics to the console
    print_rates_to_console: false
    print_timings_to_console: false
    print_delays_to_console: false
    print_statistics_on_console_period_ms: 10000

    use_non_equal_vertical_fov_lidar_params: false
    maximum_sensor_message_queue_length: 10

    # Visualization
    workspace_height_bounds_visualization_side_length: 4.0
    layer_streamer_bandwidth_limit_mbps: -1.0 # unlimited
    layer_visualization_exclusion_height_m: 2.0
    layer_visualization_exclusion_radius_m: 4.0
    layer_visualization_max_tsdf_distance_m: 0.05
    layer_visualization_min_tsdf_weight: 0.1
    layer_visualization_undo_gamma_correction: false

    static_mapper:
      # mapper
      check_neighborhood: true
      do_depth_preprocessing: false
      depth_preprocessing_num_dilations: 3

      esdf_integrator_max_distance_m: 0.15         #Buffer distance for ESDF
      esdf_integrator_max_site_distance_vox: 0.3
      esdf_integrator_min_weight: 0.3
      esdf_slice_height: 0.0
      esdf_slice_max_height: 1.2
      esdf_slice_min_height: 0.1

      max_tsdf_distance_for_occupancy_m: 0.1
      max_unobserved_to_keep_consecutive_occupancy_ms: 10

      # projective integrator (tsdf/color/occupancy)
      projective_integrator_max_integration_distance_m: 4.0
      projective_integrator_truncation_distance_vox: 3.0
      projective_integrator_weighting_mode: "inverse_square_tsdf_distance_penalty"
      projective_integrator_max_weight: 0.2

      # occupancy integrator
      free_region_occupancy_probability: 0.3
      free_region_decay_probability: 1.0
      occupied_region_occupancy_probability: 0.7
      unobserved_region_occupancy_probability: 0.5
      occupied_region_half_width_m: 0.05
      occupied_region_decay_probability: 0.4

      # view calculator
      raycast_subsampling_factor: 4
      workspace_bounds_type: "unbounded" # ["unbounded", "height_bounds", "bounding_box"]
      workspace_bounds_min_corner_x_m: 0.0
      workspace_bounds_min_corner_y_m: 0.0
      workspace_bounds_min_height_m: -0.5
      workspace_bounds_max_corner_x_m: 0.0
      workspace_bounds_max_corner_y_m: 0.0
      workspace_bounds_max_height_m: 2.0

      # mesh integrator
      mesh_integrator_min_weight: 0.1
      mesh_integrator_weld_vertices: true
      # tsdf decay integrator
      tsdf_decay_factor: 0.95
      exclude_last_view_from_decay: true
      # mesh streamer
      mesh_streamer_exclusion_height_m: 3.0
      mesh_streamer_exclusion_radius_m: 10.0
      mesh_bandwidth_limit_mbps: 999999925.0 # in megabits per second

Hi @vishrut1

Which NVIDIA Jetson are you using for your setup?

You can check the GPU usage with jetson-stats 4.3.1 to figure out whether you have enough compute power to run all these nodes.

Best,
Raffaello

I am using an NVIDIA AGX Orin Dev Kit. Is the latency and the delayed cost creation/clearing caused by a compute limitation, or by my params not being configured correctly?

Are you using 2 cameras on different sides?

I would suggest using the RGB image topic /rgb/image_rect_color instead of /left/image_rect_color, which is an RGBA image on ZED cameras.
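If it helps, that remapping change could look like the following in the launch file above. This is only a sketch: the topic names assume the default ZED ROS 2 wrapper naming, and the namespaces are taken from the original launch file.

```python
# Sketch: color remappings switched from the RGBA left-eye topic to the
# rectified RGB topic published by the ZED wrapper (assumed topic names).
robot_ns = '/robot1'
camera_ns = ['camera_bottom_back', 'camera_bottom_front']

color_remappings = []
for i, cam in enumerate(camera_ns):
    # '<cam>/rgb/image_rect_color' instead of '<cam>/left/image_rect_color'
    color_remappings.append(
        (f'camera_{i}/color/image', f'{robot_ns}/{cam}/rgb/image_rect_color'))
    color_remappings.append(
        (f'camera_{i}/color/camera_info', f'{robot_ns}/{cam}/rgb/camera_info'))
```

These tuples would replace the corresponding `camera_*/color/*` entries in the `remappings` list of the `nvblox_node` ComposableNode.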

Also, I would first start with a single camera and get a stable setup before switching to the 2-camera configuration. Does 1 camera work without problems?
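For a quick single-camera test, it should be enough to drop the `camera_1/*` remappings and override one parameter in the launch file (sketch, using the parameter names from the launch file above):

```yaml
# Hypothetical single-camera override for the nvblox_node parameters:
num_cameras: 1
use_depth: true
use_color: false
```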

If you run several nodes in one container, I would suggest using “component_container_mt”, which allows multi-threading.
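For reference, that change is a single argument in the container description. Shown here as the keyword arguments that would be passed to `ComposableNodeContainer` in the launch file above (sketch only):

```python
# Sketch: the same container as in the original launch file, but using the
# multi-threaded component container executable, so the three composable
# nodes no longer share a single spin thread.
container_kwargs = dict(
    name='nvblox_launch_container',
    package='rclcpp_components',
    executable='component_container_mt',  # was: 'component_container'
    output='screen',
)
```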

The problem with ZED cameras is that the wrapper delivers the images in CPU memory, so every image you take from the SDK must be copied to the GPU, which costs a lot of performance.

That’s why I assume a multi-ZED-camera setup is not supported for nvblox, which is something I actually wanted to ask @Raffaello as well:

What is the reason that ZED cameras don’t support nvblox with a multi-camera setup, or any of the People Segmentation, Detection, or Dynamics modes?

Thanks for your input!