cudaErrorNotSupported while running VSLAM and navigation stack

I am trying to run Isaac ROS Visual SLAM together with Nav2 on a robot that uses a Jetson Orin Nano as its main computer.

Everything was working fine until yesterday, when I started getting this error on launching the visual SLAM node and the navigation launch file:
[component_container_mt-2] [ERROR] [1732201056.959010475] [NitrosContext]: cudaErrorNotSupported (operation not supported)
[component_container_mt-2] [ERROR] [1732201056.959101123] [occupancy_grid_localizer]: [NitrosContext] setCUDAMemoryPoolSize Error: GXF_FAILURE
[component_container_mt-2] [ERROR] [1732201056.959199738] [occupancy_grid_localizer]: [NitrosNode] Failed to get the pointer of nvidia::gxf::DoubleBufferReceiver (flatscan_beams_tensor_copier/rx_flatscan) for linking a NitrosSubscriber: GXF_FAILURE
[component_container_mt-2] terminate called after throwing an instance of 'std::runtime_error'
[component_container_mt-2] what(): [NitrosNode] Failed to get the pointer of nvidia::gxf::DoubleBufferReceiver (flatscan_beams_tensor_copier/rx_flatscan) for linking a NitrosSubscriber: GXF_FAILURE

Sensors used:
LIDAR and Camera

Camera Details:
Camera used : Intel Realsense D455
Firmware Version: 5.13.0.50

JetPack Version: JetPack 6
Isaac ROS Version: 3.0

I am also attaching the visual SLAM launch file.

# Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# SPDX-License-Identifier: Apache-2.0

import launch
from launch_ros.actions import ComposableNodeContainer, Node
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    """Launch file which brings up visual slam node configured for RealSense."""
    realsense_camera_node = Node(
        name='camera',
        namespace='camera',
        package='realsense2_camera',
        executable='realsense2_camera_node',
        parameters=[{
                'enable_infra1': True,
                'enable_infra2': True,
                'enable_color': False,
                'enable_depth': False,
                'depth_module.emitter_enabled': 0,
                'depth_module.profile': '640x360x90',
                'enable_gyro': True,
                'enable_accel': True,
                'gyro_fps': 200,
                'accel_fps': 200,
                'unite_imu_method': 2
        }]
    )

    visual_slam_node = ComposableNode(
        name='visual_slam_node',
        package='isaac_ros_visual_slam',
        plugin='nvidia::isaac_ros::visual_slam::VisualSlamNode',
        parameters=[{
                    'enable_image_denoising': False,
                    'rectified_images': True,
                    'enable_imu_fusion': True,
                    'enable_localization_n_mapping': True,
                    'enable_ground_constraint_in_odometry': True,
                    'enable_ground_constraint_in_slam': True,
                    'publish_map_to_odom_tf': False, # this will be published by AMCL
                    'gyro_noise_density': 0.000244,
                    'gyro_random_walk': 0.000019393,
                    'accel_noise_density': 0.001862,
                    'accel_random_walk': 0.003,
                    'calibration_frequency': 200.0,
                    'image_jitter_threshold_ms': 22.00,
                    'base_frame': 'base_footprint',
                    'imu_frame': 'camera_gyro_optical_frame',
                    'enable_slam_visualization': True,
                    'enable_landmarks_view': True,
                    'enable_observations_view': True,
                    'camera_optical_frames': [
                        'camera_infra1_optical_frame',
                        'camera_infra2_optical_frame',
                    ],
                    }],
        remappings=[('visual_slam/image_0', 'camera/infra1/image_rect_raw'),
                    ('visual_slam/camera_info_0', 'camera/infra1/camera_info'),
                    ('visual_slam/image_1', 'camera/infra2/image_rect_raw'),
                    ('visual_slam/camera_info_1', 'camera/infra2/camera_info'),
                    ('visual_slam/imu', 'camera/imu')]
    )

    visual_slam_launch_container = ComposableNodeContainer(
        name='visual_slam_launch_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container',
        composable_node_descriptions=[visual_slam_node],
        output='screen'
    )

    return launch.LaunchDescription([visual_slam_launch_container, realsense_camera_node]) 

Thanks a lot in advance


Hi @pmoorthy

Looking at the type of error, please check if you are working with the correct version of Isaac ROS.

On JetPack 6.0, only Isaac ROS 3.1 is supported: Getting Started — isaac_ros_docs documentation
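For reference, one quick way to check which Isaac ROS packages and versions are actually installed is to query apt inside the dev container (this is a sketch; the `ros-humble-isaac-ros-*` package naming assumes an apt-based install on ROS 2 Humble, which may differ from your setup):

```shell
# List installed Isaac ROS Debian packages with their versions
# (run inside the Isaac ROS dev container).
dpkg -l | grep ros-humble-isaac-ros
```

The version column of the output should tell you whether you are on 3.0 or 3.1.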

Best,
Raffaello

Hi @Raffaello ,
I was able to figure out the error. It was caused by a device permission rule I had added to the Docker container so the LIDAR would work without using chmod. Once that rule was removed, I am no longer facing the CUDA error.
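For anyone hitting the same symptom: the kind of rule that can cause this is a blanket device cgroup grant added for a serial LIDAR, which can interfere with how the NVIDIA runtime exposes the GPU devices to the container. The fragment below is a hypothetical sketch, not the exact configuration from this robot; the device path is an example, so substitute your LIDAR's actual node:

```shell
# Problematic pattern: a wildcard device cgroup rule added so the LIDAR
# works without chmod. Removing this rule resolved the cudaErrorNotSupported.
#   --device-cgroup-rule='c *:* rmw'
#
# Safer alternative: pass only the specific LIDAR device into the container.
docker run --runtime nvidia \
  --device=/dev/ttyUSB0 \
  my_robot_image:latest
```

Scoping the permission to a single device node avoids touching the character devices the NVIDIA container runtime manages for CUDA.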

Thanks a lot,
Regards,
Pranav Moorthy


Good to hear!

I will close this topic.

Best,
Raffaello

Hi @Raffaello ,
When I checked the release notes for Isaac ROS 3.0, they stated that JetPack 6.0 was supported.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.