Error running isaac_ros_visual_slam quickstart tutorial

I am new to Isaac ROS and am working through the quickstart tutorial from the isaac_ros_visual_slam — isaac_ros_docs documentation.

Please see the jtop output for my system info:

While following the rosbag section of the tutorial, I ran this launch command:

ros2 launch isaac_ros_examples isaac_ros_examples.launch.py launch_fragments:=visual_slam \
interface_specs_file:=${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_visual_slam/quickstart_interface_specs.json \
rectified_images:=false

Terminal output:

[INFO] [launch]: All log files can be found below /home/admin/.ros/log/2025-11-18-14-07-15-758962-ubuntu-8895
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [component_container_mt-1]: process started with pid [8906]
[component_container_mt-1] [INFO] [1763455036.185478595] [isaac_ros_examples.container]: Load Library: /opt/ros/humble/lib/libimage_format_converter_node.so
[component_container_mt-1] [INFO] [1763455036.248066252] [isaac_ros_examples.container]: Found class: rclcpp_components::NodeFactoryTemplate<nvidia::isaac_ros::image_proc::ImageFormatConverterNode>
[component_container_mt-1] [INFO] [1763455036.248212529] [isaac_ros_examples.container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<nvidia::isaac_ros::image_proc::ImageFormatConverterNode>
[component_container_mt-1] [ERROR] [1763455036.255973520] [NitrosContext]: cudaErrorNotSupported (operation not supported)
[component_container_mt-1] [ERROR] [1763455036.256034578] [NitrosContext]: [NitrosContext] setCUDAMemoryPoolSize Error: GXF_FAILURE
[component_container_mt-1] [INFO] [1763455036.256052914] [image_format_node_left]: [NitrosNode] Initializing NitrosNode
[component_container_mt-1] [ERROR] [1763455036.256182646] [NitrosContext]: [NitrosContext] GxfSetSeverity Error: GXF_CONTEXT_INVALID
[component_container_mt-1] [INFO] [1763455036.256400380] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/std/libgxf_std.so
[component_container_mt-1] [INFO] [1763455036.258485016] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/libgxf_isaac_gxf_helpers.so
[component_container_mt-1] [INFO] [1763455036.265336765] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/libgxf_isaac_sight.so
[component_container_mt-1] [INFO] [1763455036.272990074] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/libgxf_isaac_atlas.so
[component_container_mt-1] 2025-11-18 14:07:16.290 WARN  gxf/std/program.cpp@538: No GXF scheduler specified.
[component_container_mt-1] [INFO] [1763455036.291319273] [image_format_node_left]: [ImageFormatConverterNode] Set output data format to: "nitros_image_mono8"
[component_container_mt-1] [INFO] [1763455036.291479182] [image_format_node_left]: [NitrosNode] Starting NitrosNode
[component_container_mt-1] [INFO] [1763455036.307159473] [image_format_node_left]: [NitrosNode] Loading extensions
[component_container_mt-1] [INFO] [1763455036.307387671] [image_format_node_left]: [NitrosContext] Loading extension: gxf/lib/multimedia/libgxf_multimedia.so
[component_container_mt-1] [ERROR] [1763455036.307433113] [image_format_node_left]: [NitrosContext] GxfLoadExtensions Error: GXF_CONTEXT_INVALID
[component_container_mt-1] [ERROR] [1763455036.307458329] [image_format_node_left]: [NitrosNode] loadExtensions Error: GXF_CONTEXT_INVALID
[component_container_mt-1] [INFO] [1763455036.307620542] [image_format_node_left]: [NitrosNode] Terminating the running application
[component_container_mt-1] [INFO] [1763455036.307654047] [image_format_node_left]: [NitrosContext] Interrupting GXF...
[component_container_mt-1] [ERROR] [1763455036.307677216] [image_format_node_left]: [NitrosContext] GxfGraphInterrupt Error: GXF_CONTEXT_INVALID
[component_container_mt-1] [INFO] [1763455036.307691648] [image_format_node_left]: [NitrosContext] Waiting on GXF...
[component_container_mt-1] [ERROR] [1763455036.307708129] [image_format_node_left]: [NitrosContext] GxfGraphWait Error: GXF_CONTEXT_INVALID
[component_container_mt-1] [INFO] [1763455036.307721217] [image_format_node_left]: [NitrosNode] Application termination done
[component_container_mt-1] [ERROR] [1763455036.321600592] [isaac_ros_examples.container]: Component constructor threw an exception: [NitrosNode] loadExtensions Error: GXF_CONTEXT_INVALID
[ERROR] [launch_ros.actions.load_composable_nodes]: Failed to load node 'image_format_node_left' of type 'nvidia::isaac_ros::image_proc::ImageFormatConverterNode' in container '/isaac_ros_examples/container': Component constructor threw an exception: [NitrosNode] loadExtensions Error: GXF_CONTEXT_INVALID
[component_container_mt-1] [INFO] [1763455036.326732196] [isaac_ros_examples.container]: Found class: rclcpp_components::NodeFactoryTemplate<nvidia::isaac_ros::image_proc::ImageFormatConverterNode>
[component_container_mt-1] [INFO] [1763455036.326806054] [isaac_ros_examples.container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<nvidia::isaac_ros::image_proc::ImageFormatConverterNode>
[component_container_mt-1] [INFO] [1763455036.330622804] [image_format_node_right]: [NitrosNode] Initializing NitrosNode
[component_container_mt-1] [INFO] [1763455036.331135459] [image_format_node_right]: [ImageFormatConverterNode] Set output data format to: "nitros_image_mono8"
[component_container_mt-1] [INFO] [1763455036.331265287] [image_format_node_right]: [NitrosNode] Starting NitrosNode
[component_container_mt-1] [INFO] [1763455036.348351378] [image_format_node_right]: [NitrosNode] Loading extensions
[component_container_mt-1] [INFO] [1763455036.348549432] [image_format_node_right]: [NitrosContext] Loading extension: gxf/lib/libgxf_isaac_message_compositor.so
[component_container_mt-1] [INFO] [1763455036.349427185] [image_format_node_right]: [NitrosContext] Loading extension: gxf/lib/cuda/libgxf_cuda.so
[component_container_mt-1] [INFO] [1763455036.354515396] [image_format_node_right]: [NitrosContext] Loading extension: gxf/lib/libgxf_isaac_tensorops.so
[component_container_mt-1] [INFO] [1763455036.361348136] [image_format_node_right]: [NitrosNode] Loading graph to the optimizer
[component_container_mt-1] [INFO] [1763455036.363782094] [image_format_node_right]: [NitrosNode] Running optimization
[component_container_mt-1] [INFO] [1763455036.451922710] [image_format_node_right]: [NitrosNode] Obtaining graph IO group info from the optimizer
[component_container_mt-1] [INFO] [1763455036.468044454] [image_format_node_right]: [NitrosPublisherSubscriberGroup] Pinning the component "sink/sink" (type="nvidia::isaac_ros::MessageRelay") to use its compatible format only: "nitros_image_mono8"
[component_container_mt-1] [INFO] [1763455036.472023097] [image_format_node_right]: [NitrosNode] Starting negotiation...
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/image_format_node_right' in container '/isaac_ros_examples/container'
[component_container_mt-1] [INFO] [1763455036.476026892] [isaac_ros_examples.container]: Load Library: /opt/ros/humble/lib/libvisual_slam_node.so
[component_container_mt-1] [INFO] [1763455036.527352017] [isaac_ros_examples.container]: Found class: rclcpp_components::NodeFactoryTemplate<nvidia::isaac_ros::visual_slam::VisualSlamNode>
[component_container_mt-1] [INFO] [1763455036.527500853] [isaac_ros_examples.container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<nvidia::isaac_ros::visual_slam::VisualSlamNode>
[component_container_mt-1] [INFO] [1763455036.536284498] [visual_slam_node.ManagedNitrosSubscriber]: Starting Managed Nitros Subscriber
[component_container_mt-1] [INFO] [1763455036.537912577] [visual_slam_node.ManagedNitrosSubscriber]: Starting Managed Nitros Subscriber
[component_container_mt-1] [INFO] [1763455036.538032196] [image_format_node_right]: Negotiating
[component_container_mt-1] [INFO] [1763455036.538113287] [image_format_node_right]: Could not negotiate
[component_container_mt-1] [INFO] [1763455036.558433999] [visual_slam_node]: cuVSLAM version: 12.6
[component_container_mt-1] [INFO] [1763455036.560365735] [visual_slam_node]: Time taken by CUVSLAM_WarmUpGPU(): 0.001827
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/visual_slam_node' in container '/isaac_ros_examples/container'
[component_container_mt-1] [INFO] [1763455037.473229344] [image_format_node_right]: [NitrosNode] Starting post negotiation setup
[component_container_mt-1] [INFO] [1763455037.473376964] [image_format_node_right]: [NitrosNode] Getting data format negotiation results
[component_container_mt-1] [INFO] [1763455037.473431749] [image_format_node_right]: [NitrosPublisher] Negotiation ended with no results
[component_container_mt-1] [INFO] [1763455037.473470886] [image_format_node_right]: [NitrosPublisher] Use only the compatible publisher: topic_name="/right/image_rect_mono", data_format="nitros_image_mono8"
[component_container_mt-1] [INFO] [1763455037.473510184] [image_format_node_right]: [NitrosSubscriber] Negotiation ended with no results
[component_container_mt-1] [INFO] [1763455037.473539272] [image_format_node_right]: [NitrosSubscriber] Use the compatible subscriber: topic_name="/right/image_rect", data_format="nitros_image_rgb8"
[component_container_mt-1] [INFO] [1763455037.474009814] [image_format_node_right]: [NitrosNode] Exporting the final graph based on the negotiation results
[component_container_mt-1] [INFO] [1763455037.505921484] [image_format_node_right]: [NitrosNode] Wrote the final top level YAML graph to "/tmp/isaac_ros_nitros/graphs/PMMAKWBSZX/PMMAKWBSZX.yaml"
[component_container_mt-1] [INFO] [1763455037.506077361] [image_format_node_right]: [NitrosNode] Loading application
[component_container_mt-1] [INFO] [1763455037.510095684] [image_format_node_right]: [ImageFormatConverterNode] postLoadGraphCallback().
[component_container_mt-1] [INFO] [1763455037.510325643] [image_format_node_right]: [NitrosNode] Initializing and running GXF graph
[component_container_mt-1] 2025-11-18 14:07:17.511 ERROR gxf/std/block_memory_pool.cpp@77: Failure in cudaMalloc. cuda_error: cudaErrorNotSupported, error_str: operation not supported
[component_container_mt-1] 2025-11-18 14:07:17.511 ERROR gxf/std/entity_warden.cpp@548: Failed to initialize component 00022 (pool)
[component_container_mt-1] 2025-11-18 14:07:17.511 ERROR gxf/core/runtime.cpp@742: Could not initialize entity 'PMMAKWBSZX_imageConverter' (E17): GXF_OUT_OF_MEMORY
[component_container_mt-1] 2025-11-18 14:07:17.511 ERROR gxf/std/program.cpp@289: Failed to activate entity 00017 named PMMAKWBSZX_imageConverter: GXF_OUT_OF_MEMORY
[component_container_mt-1] 2025-11-18 14:07:17.511 ERROR gxf/std/program.cpp@291: Deactivating...
[component_container_mt-1] 2025-11-18 14:07:17.511 ERROR gxf/core/runtime.cpp@1625: Graph activation failed with error: GXF_OUT_OF_MEMORY
[component_container_mt-1] [ERROR] [1763455037.511589551] [image_format_node_right]: [NitrosContext] GxfGraphActivate Error: GXF_OUT_OF_MEMORY
[component_container_mt-1] [ERROR] [1763455037.511683506] [image_format_node_right]: [NitrosNode] runGraphAsync Error: GXF_OUT_OF_MEMORY
[component_container_mt-1] terminate called after throwing an instance of 'std::runtime_error'
[component_container_mt-1]   what():  [NitrosNode] runGraphAsync Error: GXF_OUT_OF_MEMORY
[ERROR] [component_container_mt-1]: process has died [pid 8906, exit code -6, cmd '/opt/ros/humble/lib/rclcpp_components/component_container_mt --ros-args -r __node:=container -r __ns:=/isaac_ros_examples'].

Most of the errors have one term in common: GXF. What is it?
What is the root cause of this error, and how can I solve it?

Thanks in advance.

Hello @adityap1,

Welcome to the Isaac ROS Forum!

Based on your system information, JetPack appears to be missing from your setup. Please install the correct version of JetPack for your hardware, following this page.

The CUDA memory allocation failure you are seeing is typically the result of a missing GPU driver, insufficient memory, or a configuration issue; it indicates that the Isaac ROS node cannot properly initialize or access GPU resources.
Please install JetPack first to resolve the fundamental compatibility issues.
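In case it helps, here is a minimal sketch for checking which JetPack/L4T release is present. It assumes the standard file layout of a flashed Jetson (/etc/nv_tegra_release and the nvidia-jetpack meta package) and falls back gracefully if those are missing:

```shell
# Report the L4T release string, if this is a flashed Jetson.
if [ -r /etc/nv_tegra_release ]; then
    L4T_INFO="$(cat /etc/nv_tegra_release)"
else
    L4T_INFO="no /etc/nv_tegra_release found (not a flashed Jetson?)"
fi
echo "$L4T_INFO"

# Report the installed nvidia-jetpack meta package version, if any.
dpkg-query -W nvidia-jetpack 2>/dev/null || echo "nvidia-jetpack meta package not installed"
```

If the first command prints an L4T release but the second reports the meta package as missing, JetPack was likely flashed without the apt packages being installed.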

Hi @vchuang ,

As you suggested, I reinstalled JetPack 6.2 using the SDK Manager method. Then I checked jtop and again got the error that JetPack is missing.

Then I found that my jtop version (4.3.2) has some known issues, so I followed this:

After a reboot I checked again and got the same error. However, this command shows the following result:

agv@ubuntu:~$ apt-cache show nvidia-jetpack

Package: nvidia-jetpack
Source: nvidia-jetpack (6.2.1)
Version: 6.2.1+b38
Architecture: arm64
Maintainer: NVIDIA Corporation
Installed-Size: 194
Depends: nvidia-jetpack-runtime (= 6.2.1+b38), nvidia-jetpack-dev (= 6.2.1+b38)
Homepage: http://developer.nvidia.com/jetson
Priority: standard
Section: metapackages
Filename: pool/main/n/nvidia-jetpack/nvidia-jetpack_6.2.1+b38_arm64.deb
Size: 29300
SHA256: dd9cb893fbe7f80d2c2348b268f17c8140b18b9dbb674fa8d79facfaa2050c53
SHA1: dc630f213f9afcb6f67c65234df7ad5c019edb9c
MD5sum: 9c8dc61bdab2b816dcc7cd253bcf6482
Description: NVIDIA Jetpack Meta Package
Description-md5: ad1462289bdbc54909ae109d1d32c0a8

Package: nvidia-jetpack
Source: nvidia-jetpack (6.2)
Version: 6.2+b77
Architecture: arm64
Maintainer: NVIDIA Corporation
Installed-Size: 194
Depends: nvidia-jetpack-runtime (= 6.2+b77), nvidia-jetpack-dev (= 6.2+b77)
Homepage: http://developer.nvidia.com/jetson
Priority: standard
Section: metapackages
Filename: pool/main/n/nvidia-jetpack/nvidia-jetpack_6.2+b77_arm64.deb
Size: 29298
SHA256: 70553d4b5a802057f9436677ef8ce255db386fd3b5d24ff2c0a8ec0e485c59cd
SHA1: 9deab64d12eef0e788471e05856c84bf2a0cf6e6
MD5sum: 4db65dc36434fe1f84176843384aee23
Description: NVIDIA Jetpack Meta Package
Description-md5: ad1462289bdbc54909ae109d1d32c0a8

Package: nvidia-jetpack
Source: nvidia-jetpack (6.1)
Version: 6.1+b123
Architecture: arm64
Maintainer: NVIDIA Corporation
Installed-Size: 194
Depends: nvidia-jetpack-runtime (= 6.1+b123), nvidia-jetpack-dev (= 6.1+b123)
Homepage: http://developer.nvidia.com/jetson
Priority: standard
Section: metapackages
Filename: pool/main/n/nvidia-jetpack/nvidia-jetpack_6.1+b123_arm64.deb
Size: 29312
SHA256: b6475a6108aeabc5b16af7c102162b7c46c36361239fef6293535d05ee2c2929
SHA1: f0984a6272c8f3a70ae14cb2ca6716b8c1a09543
MD5sum: a167745e1d88a8d7597454c8003fa9a4
Description: NVIDIA Jetpack Meta Package
Description-md5: ad1462289bdbc54909ae109d1d32c0a8

Does that mean JetPack 6.2 is already installed? I did the same thing on my previous JetPack flash, with exactly the same apt-cache output for nvidia-jetpack. What could be the reason?

Yes, it looks like JetPack is already installed on your system; you can verify it with "dpkg -l | grep nvidia-jetpack".
Please see the following post for your reference: 263150.

That command outputs an empty result, but apt list shows that nvidia-jetpack is installed:

agv@ubuntu:~$ dpkg -l | grep nvidia-jetpack
agv@ubuntu:~$ dpkg -l | grep nvidia-jetpack
agv@ubuntu:~$ sudo apt list | grep nvidia-jetpack
[sudo] password for agv: 

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

nvidia-jetpack-dev/stable 6.2.1+b38 arm64
nvidia-jetpack-runtime/stable 6.2.1+b38 arm64
nvidia-jetpack/stable 6.2.1+b38 arm64

So I am assuming it is installed correctly. Now, how do I resolve the error I get when running the isaac_ros_visual_slam package?

The issue in your log is a failure to allocate memory on the GPU when attempting to start the ImageFormatConverterNode, causing the node to crash and the launch to fail.
You could use the command nvidia-smi on the host system to check the Memory-Usage before launching the container. If the available memory is low, try a fresh reboot or stop non-essential processes.

If you are using a camera or a large rosbag, reduce the image resolution (e.g., from 1280x720 to 640x480) and try again. This will dramatically lower the memory requirements.
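The memory check mentioned above can be sketched as follows. Since the Jetson GPU shares system RAM, available system memory from free is a reasonable proxy; the nvidia-smi query is only attempted if the tool exists on your system (on Jetson you may need tegrastats instead):

```shell
# Available system memory in MB (7th field of the "Mem:" row of `free -m`).
AVAILABLE_MB="$(free -m | awk '/^Mem:/ {print $7}')"
echo "Available system memory: ${AVAILABLE_MB} MB"

# Query GPU memory usage only if nvidia-smi is present on this system.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=memory.used,memory.total --format=csv
else
    echo "nvidia-smi not found; on Jetson, try tegrastats instead"
fi
```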

I am simply following your Isaac ROS tutorial on a Jetson Orin Nano with all prerequisites fulfilled, but I am still getting this fundamental GPU memory issue. Why?

I asked ChatGPT the same question and got this summary of its response:

" You were running Isaac ROS Visual SLAM inside the ros2_humble.realsense container on a Jetson Orin and kept getting GPU-related failures such as cudaErrorNotSupported, GXF_CONTEXT_INVALID, and GXF_OUT_OF_MEMORY. The underlying cause is that the Isaac ROS development containers are built for x86_64 Ubuntu, not for Jetson’s ARM64 L4T environment, and therefore they do not include the Jetson-specific CUDA/TensorRT/L4T userspace libraries that are required to access the integrated GPU. Even with the NVIDIA container runtime enabled, these containers cannot interface with Jetson’s nvGPU driver stack, which leads to all CUDA allocations and GXF graph operations failing. The issue is not with your hardware or configuration—it’s because the container simply doesn’t match the Jetson platform."

Please confirm whether these containers are also built for Tegra chips and are ready to use directly.

Please provide a tutorial for using Isaac ROS without any Docker containers, via a native build of those packages.

Thank you.

Hello @adityap1,

Thank you for your detailed feedback. To clarify, our documentation includes separate sections for both x86 platforms and Jetson platforms. When setting up the Isaac ROS environment, please ensure you are following the instructions specific to Jetson, as the commands and requirements differ.
Currently, we officially support running Isaac ROS inside containers, and our documentation reflects this approach. A tutorial for native installation (without Docker) is not included in our documentation at this time.

Hi @vchuang ,

I only followed the instructions specific to Jetson. Could you replicate this scenario by testing and running these packages on a Jetson Orin 8 GB module directly? I would really appreciate that, as I really want to try out the packages you have built.

Thank you.

Hello @adityap1,

Could you please let us know if you saw any error logs when running the following command to launch the Docker container?

cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
./scripts/run_dev.sh

If you can provide the error log, it will be easier for us to assist you if the issue is related to the package.

For reference, I have installed the isaac_ros_visual_slam package on a Jetson NX 8GB following our documentation, and I am able to run the rosbag without encountering the GXF issue in the logs. However, please note that if you are using recent versions of Docker (19.03+), you might need to replace --runtime nvidia with --gpus all in the run_dev.sh script. After making this change, I was able to run the example successfully.

Please try this modification and let me know if it resolves your problem.

Hi @vchuang ,

Let me give you my docker configuration.

This is custom Dockerfile created called Dockerfile.mine:

ARG BASE_IMAGE=nvcr.io/nvidia/isaac/ros:aarch64-ros2_humble_4c0c55dddd2bbcc3e8d5f9753bee634c
FROM ${BASE_IMAGE}

ENV RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
ENV FASTRTPS_DEFAULT_PROFILES_FILE=""

RUN python3 -m pip install --upgrade pip && pip install pymodbus numpy pyserial crcmod

RUN apt-get update && apt-get install -y libmodbus-dev

This is .isaac_ros_common-config created in home directory:

CONFIG_IMAGE_KEY=ros2_humble.realsense.mine
CONFIG_DOCKER_SEARCH_DIRS=(/home/agv/workspaces/isaac_ros_agv/src/isaac_ros_common/docker )
CONFIG_CONTAINER_NAME_SUFFIX=mark2
BASE_DOCKER_REGISTRY_NAMES=("isaac_ros_dev-aarch64" "nvcr.io/nvidia/isaac/ros")

And this is what I got after running run_dev.sh script :

agv@ubuntu:~/workspaces/isaac_ros_agv$ cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
./scripts/run_dev.sh
Launching Isaac ROS Dev container with image key aarch64.ros2_humble.realsense.mine: /home/agv/workspaces/isaac_ros_agv
Building aarch64.ros2_humble.realsense.mine base as image: isaac_ros_dev-aarch64-mark2
Building layered image for key aarch64.ros2_humble.realsense.mine as isaac_ros_dev-aarch64-mark2
Using configured docker search paths: /home/agv/workspaces/isaac_ros_agv/src/isaac_ros_common/docker /home/agv/workspaces/isaac_ros_agv/src/isaac_ros_common/scripts/../docker
Checking if base image isaac_ros_dev-aarch64:aarch64-ros2_humble-realsense-mine_3fde73b4df7f40e11705f1be00626f32 exists on remote registry
Checking if base image nvcr.io/nvidia/isaac/ros:aarch64-ros2_humble-realsense-mine_3fde73b4df7f40e11705f1be00626f32 exists on remote registry
Checking if base image isaac_ros_dev-aarch64:aarch64-ros2_humble-realsense_0c5ed46be508cacd00f551276fb6125e exists on remote registry
Checking if base image nvcr.io/nvidia/isaac/ros:aarch64-ros2_humble-realsense_0c5ed46be508cacd00f551276fb6125e exists on remote registry
Checking if base image isaac_ros_dev-aarch64:aarch64-ros2_humble_4c0c55dddd2bbcc3e8d5f9753bee634c exists on remote registry
Checking if base image nvcr.io/nvidia/isaac/ros:aarch64-ros2_humble_4c0c55dddd2bbcc3e8d5f9753bee634c exists on remote registry
Found pre-built base image: nvcr.io/nvidia/isaac/ros:aarch64-ros2_humble_4c0c55dddd2bbcc3e8d5f9753bee634c
aarch64-ros2_humble_4c0c55dddd2bbcc3e8d5f9753bee634c: Pulling from nvidia/isaac/ros
Digest: sha256:dd032d9aa0a4647460ab83dac734a155234a66fabeb80f9c659e7d4542a1ac94
Status: Image is up to date for nvcr.io/nvidia/isaac/ros:aarch64-ros2_humble_4c0c55dddd2bbcc3e8d5f9753bee634c
nvcr.io/nvidia/isaac/ros:aarch64-ros2_humble_4c0c55dddd2bbcc3e8d5f9753bee634c
Finished pulling pre-built base image: nvcr.io/nvidia/isaac/ros:aarch64-ros2_humble_4c0c55dddd2bbcc3e8d5f9753bee634c
Resolved the following 2 Dockerfiles for target image: aarch64.ros2_humble.realsense.mine
/home/agv/workspaces/isaac_ros_agv/src/isaac_ros_common/docker/Dockerfile.mine
/home/agv/workspaces/isaac_ros_agv/src/isaac_ros_common/docker/Dockerfile.realsense
Building /home/agv/workspaces/isaac_ros_agv/src/isaac_ros_common/docker/Dockerfile.realsense as image: realsense-image with base: nvcr.io/nvidia/isaac/ros:aarch64-ros2_humble_4c0c55dddd2bbcc3e8d5f9753bee634c
[+] Building 3.6s (13/13) FINISHED                                                                                                                         docker:default
 => [internal] load build definition from Dockerfile.realsense                                                                                                       0.0s
 => => transferring dockerfile: 2.42kB                                                                                                                               0.0s
 => [internal] load metadata for nvcr.io/nvidia/isaac/ros:aarch64-ros2_humble_4c0c55dddd2bbcc3e8d5f9753bee634c                                                       0.0s
 => [internal] load .dockerignore                                                                                                                                    0.0s
 => => transferring context: 2B                                                                                                                                      0.0s
 => [internal] load build context                                                                                                                                    0.0s
 => => transferring context: 280B                                                                                                                                    0.0s
 => [stage-0 1/8] FROM nvcr.io/nvidia/isaac/ros:aarch64-ros2_humble_4c0c55dddd2bbcc3e8d5f9753bee634c@sha256:dd032d9aa0a4647460ab83dac734a155234a66fabeb80f9c659e7d4  0.1s
 => => resolve nvcr.io/nvidia/isaac/ros:aarch64-ros2_humble_4c0c55dddd2bbcc3e8d5f9753bee634c@sha256:dd032d9aa0a4647460ab83dac734a155234a66fabeb80f9c659e7d4542a1ac9  0.1s
 => CACHED [stage-0 2/8] COPY scripts/build-librealsense.sh /opt/realsense/build-librealsense.sh                                                                     0.0s
 => CACHED [stage-0 3/8] COPY scripts/install-realsense-dependencies.sh /opt/realsense/install-realsense-dependencies.sh                                             0.0s
 => CACHED [stage-0 4/8] RUN chmod +x /opt/realsense/install-realsense-dependencies.sh &&     /opt/realsense/install-realsense-dependencies.sh;     chmod +x /opt/r  0.0s
 => CACHED [stage-0 5/8] RUN mkdir -p /opt/realsense/                                                                                                                0.0s
 => CACHED [stage-0 6/8] COPY scripts/hotplug-realsense.sh /opt/realsense/hotplug-realsense.sh                                                                       0.0s
 => CACHED [stage-0 7/8] COPY udev_rules/99-realsense-libusb-custom.rules /etc/udev/rules.d/99-realsense-libusb-custom.rules                                         0.0s
 => CACHED [stage-0 8/8] RUN --mount=type=cache,target=/var/cache/apt     mkdir -p /opt/ros/humble/src && cd /opt/ros/humble/src     && git clone https://github.co  0.1s
 => exporting to image                                                                                                                                               0.7s
 => => exporting layers                                                                                                                                              0.1s
 => => exporting manifest sha256:ce81a30217852e8e2a27215e37e86708c9c1e3e01d997b2f12cce462f7e47466                                                                    0.0s
 => => exporting config sha256:a83db908fe00de0e4b911052e3137f1b933957a1653a75479d5ec94c9f15a358                                                                      0.0s
 => => exporting attestation manifest sha256:c276fe2c02b871d22a0d4c9fec5c31a8153003f8a4a32c74a24f722b6928ed5d                                                        0.0s
 => => exporting manifest list sha256:228fc8f0ab6967839eae9445e4a414b82d262c037dc9d9d70ec64dd1746b0ae0                                                               0.0s
 => => naming to docker.io/library/realsense-image:latest                                                                                                            0.0s
 => => unpacking to docker.io/library/realsense-image:latest                                                                                                         0.3s
Building /home/agv/workspaces/isaac_ros_agv/src/isaac_ros_common/docker/Dockerfile.mine as image: isaac_ros_dev-aarch64-mark2 with base: realsense-image
[+] Building 1.8s (7/7) FINISHED                                                                                                                           docker:default
 => [internal] load build definition from Dockerfile.mine                                                                                                            0.0s
 => => transferring dockerfile: 381B                                                                                                                                 0.0s
 => [internal] load metadata for docker.io/library/realsense-image:latest                                                                                            0.0s
 => [internal] load .dockerignore                                                                                                                                    0.0s
 => => transferring context: 2B                                                                                                                                      0.0s
 => [1/3] FROM docker.io/library/realsense-image:latest@sha256:228fc8f0ab6967839eae9445e4a414b82d262c037dc9d9d70ec64dd1746b0ae0                                      0.1s
 => => resolve docker.io/library/realsense-image:latest@sha256:228fc8f0ab6967839eae9445e4a414b82d262c037dc9d9d70ec64dd1746b0ae0                                      0.1s
 => CACHED [2/3] RUN python3 -m pip install --upgrade pip && pip install pymodbus numpy pyserial crcmod                                                              0.0s
 => CACHED [3/3] RUN apt-get update && apt-get install -y libmodbus-dev                                                                                              0.1s
 => exporting to image                                                                                                                                               0.6s
 => => exporting layers                                                                                                                                              0.1s
 => => exporting manifest sha256:dac47d8e5821c3df10e81360c1f4879e92d4f55d9282b21a65d2c764df0acbbc                                                                    0.0s
 => => exporting config sha256:b855967c512bb511f926f82186083b4b0a7679dc3b7a9a4e3b4cf4bb2b2f3a2f                                                                      0.0s
 => => exporting attestation manifest sha256:74f2fe763d9a9b0994b826354e9e1546623250eb7e795886377be43c772d39fd                                                        0.0s
 => => exporting manifest list sha256:452bb1aecae2688e164a27f4d2b48828ddf3890a5a9278d6083c5d796ac63666                                                               0.0s
 => => naming to docker.io/library/isaac_ros_dev-aarch64-mark2:latest                                                                                                0.0s
 => => unpacking to docker.io/library/isaac_ros_dev-aarch64-mark2:latest                                                                                             0.2s
Using additional Docker run arguments from /home/agv/.isaac_ros_dev-dockerargs
Running isaac_ros_dev-aarch64-mark2-container
Creating non-root container 'admin' for host user uid=1000:gid=1000
 * Stopping hotplug events dispatcher systemd-udevd                                                                                                                [ OK ] 
 * Starting hotplug events dispatcher systemd-udevd                                                                                                                [ OK ] 
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

admin@ubuntu:/workspaces/isaac_ros-dev$ 

I don't see any errors when the Docker container is launched. (Note: please check the last 6 to 7 lines of the logs above in case anything there is serious.)

I also changed the run_dev.sh script as you suggested (replacing '--runtime nvidia' with '--gpus all'):

docker run -it --rm \
    --privileged \
    --network host \
    --ipc=host \
    ${DOCKER_ARGS[@]} \
    -v $ISAAC_ROS_DEV_DIR:/workspaces/isaac_ros-dev \
    -v /etc/localtime:/etc/localtime:ro \
    --name "$CONTAINER_NAME" \
    --gpus all \
    --entrypoint /usr/local/bin/scripts/workspace-entrypoint.sh \
    --workdir /workspaces/isaac_ros-dev \
    $BASE_NAME \
    /bin/bash

I am still getting the same error after launching the rosbag.

Hello @adityap1,

Please follow NVIDIA’s official Jetson setup steps first (Compute Setup → Jetson Platforms) to set up Docker, then verify CUDA visibility before running any Isaac ROS package.

Additionally, you can verify the libraries inside your container with the following quick checks to ensure CUDA is running correctly before starting any Isaac ROS package.

  • echo $NVIDIA_DRIVER_CAPABILITIES
  • ldconfig -p | grep libcudart
  • ls /usr/lib/aarch64-linux-gnu/nvidia | head
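Beyond the library checks above, here is a minimal Python sketch (an assumption on my part, not part of the official docs) that probes whether the CUDA driver actually initializes inside the container, by loading libcuda.so.1 via ctypes and calling cuInit and cuDeviceGetCount from the CUDA driver API:

```python
import ctypes


def probe_cuda_driver() -> str:
    """Load libcuda and call cuInit/cuDeviceGetCount.

    Returns a human-readable status string instead of raising, so it can
    be run safely whether or not the driver libraries are mounted.
    """
    try:
        libcuda = ctypes.CDLL("libcuda.so.1")
    except OSError:
        return "libcuda.so.1 not found -- driver libraries not visible in this environment"

    rc = libcuda.cuInit(0)
    if rc != 0:  # CUDA_SUCCESS == 0
        return f"cuInit failed with CUDA error code {rc}"

    count = ctypes.c_int(0)
    rc = libcuda.cuDeviceGetCount(ctypes.byref(count))
    if rc != 0:
        return f"cuDeviceGetCount failed with CUDA error code {rc}"
    return f"CUDA driver OK, {count.value} device(s) visible"


if __name__ == "__main__":
    print(probe_cuda_driver())
```

If this reports a failure (or no devices) inside the container but succeeds on the host, the problem is in how the container exposes the GPU rather than in the Isaac ROS packages themselves.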

I have already completed all the Docker configuration steps. Here is the output of those commands:

admin@ubuntu:/workspaces/isaac_ros-dev$ echo $NVIDIA_DRIVER_CAPABILITIES
all
admin@ubuntu:/workspaces/isaac_ros-dev$ 
admin@ubuntu:/workspaces/isaac_ros-dev$ 
admin@ubuntu:/workspaces/isaac_ros-dev$ ldconfig -p | grep libcudart
	libcudart.so.12 (libc6,AArch64) => /usr/local/cuda/targets/aarch64-linux/lib/libcudart.so.12
	libcudart.so (libc6,AArch64) => /usr/local/cuda/targets/aarch64-linux/lib/libcudart.so
admin@ubuntu:/workspaces/isaac_ros-dev$ 
admin@ubuntu:/workspaces/isaac_ros-dev$ 
admin@ubuntu:/workspaces/isaac_ros-dev$ 
admin@ubuntu:/workspaces/isaac_ros-dev$ ls /usr/lib/aarch64-linux-gnu/nvidia | head
ld.so.conf
libcuda.so
libcuda.so.1
libcuda.so.1.1
libGLX_indirect.so.0
libGLX_nvidia.so.0
libgstnvcustomhelper.so
libgstnvcustomhelper.so.1.0.0
libgstnvdsseimeta.so
libgstnvdsseimeta.so.1.0.0
admin@ubuntu:/workspaces/isaac_ros-dev$ 

What about the output of ldconfig -p | grep libcuda?
Also, what version of JetPack did you last install? Is it 6.2.1 or 6.2.0?
If you’re using 6.2.1 and would like to minimize the risk of potential issues like this GXF error, you might want to consider downgrading to 6.2.0 directly, as it’s the officially tested and qualified system.
Please check here.

admin@ubuntu:/workspaces/isaac_ros-dev$ ldconfig -p | grep libcuda
	libcudart.so.12 (libc6,AArch64) => /usr/local/cuda/targets/aarch64-linux/lib/libcudart.so.12
	libcudart.so (libc6,AArch64) => /usr/local/cuda/targets/aarch64-linux/lib/libcudart.so
	libcudadebugger.so.1 (libc6,AArch64) => /usr/lib/aarch64-linux-gnu/tegra/libcudadebugger.so.1
	libcuda.so.1 (libc6,AArch64) => /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1
	libcuda.so.1 (libc6,AArch64) => /usr/lib/aarch64-linux-gnu/tegra/libcuda.so.1
	libcuda.so (libc6,AArch64) => /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so
	libcuda.so (libc6,AArch64) => /usr/lib/aarch64-linux-gnu/libcuda.so
	libcuda.so (libc6,AArch64) => /usr/lib/aarch64-linux-gnu/tegra/libcuda.so

I installed JetPack 6.2.1 (rev. 1). Do I really need to downgrade my JetPack version? Did you install JetPack 6.2 (rev. 2) on your NVIDIA Jetson NX 8 GB module?

I’ve run the same package on both JetPack environments on Jetson NX without issues. However, if you want to avoid unexpected problems, I recommend using the exact JetPack 6.2 version specified in the documentation, as that’s the one that has been verified.

Hi @vchuang ,

After downgrading JetPack to 6.2, I got the isaac_ros_visual_slam package working.

Thank you very much.

Thank you for sharing your results, and I’m glad to hear that it’s working for you!
