Using ROS 2 Humble on a Jetson with hardware acceleration

Hey everyone,

I’m working on a university project where we have an RC car equipped with an NVIDIA Jetson and several sensors, including cameras. One of our main tasks is hardware-accelerated detection of a race track using ROS 2 running inside a container on the Jetson.

Right now, we’re using a pretty standard ROS 2 container, but to really get good performance with hardware acceleration, we want to switch to the dusty-nv Jetson containers from NVIDIA (or any others, if you have tips) because they have better GPU support and all the CUDA stack built in.

The problem is, these dusty-nv containers don’t come with many ROS2 packages pre-installed — unlike the official ROS2 Ubuntu containers. That means a lot of the packages we need aren’t available via apt and we have to manually clone a ton of repos and build everything from source, which is super time-consuming and slows down development a lot.

So my challenge right now is figuring out how to keep the benefits of the dusty-nv container (hardware acceleration and CUDA) while still having all the ROS2 packages we need without spending hours cloning and building everything manually every time.
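In case it helps anyone with the same problem: the clone-and-build cycle can at least be scripted and cached instead of done by hand every time. Here is a minimal sketch (the file names, workspace path, and `/opt/ros/humble` location are assumptions — verify them against your image; `vcs`, `rosdep`, and `colcon` are the standard ROS tooling for this) that you could run once inside the container, or put behind a `RUN` step in your own Dockerfile layered on the dusty-nv base:

```shell
#!/usr/bin/env bash
# Hypothetical helper: clone a pinned list of ROS 2 source repos and build them once.
# my_packages.repos is a vcstool file you maintain with the repos/branches you need.
set -euo pipefail

WS=/workspace
mkdir -p "$WS/src"
cd "$WS"

# vcstool clones every repo listed in the .repos file in one shot
pip3 install -U vcstool
vcs import src < my_packages.repos

# pull in whatever dependencies ARE resolvable via apt/pip
rosdep update
rosdep install --from-paths src --ignore-src -y

# build once; if this runs in a Dockerfile layer, it stays cached until
# my_packages.repos changes
source /opt/ros/humble/setup.bash   # or install/setup.bash, depending on the image
colcon build --symlink-install
```

If you bake this into your own image (e.g. `FROM dustynv/ros:humble-desktop-...`), the clone/build cost is paid once per image build rather than on every container start, thanks to Docker layer caching.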

If anyone has experience with this setup or tips on managing ROS2 packages in these Jetson containers, I’d love to hear your thoughts! How do you handle this tradeoff between hardware acceleration support and package availability?
Or am I completely missing something obvious that could prevent these troubles?

Thanks in advance!

Hi, there is no easy way to combine a from-source build of ROS with the pre-built apt packages; it is one or the other. I stay with building from source because it often becomes necessary anyway. I added helper scripts to jetson-containers that automatically add/compile additional ROS packages into the container and workspace; these are used for building the Isaac packages and others in the repo. You can also use multiple containers alongside other ROS installs, like those from Isaac (currently Humble on JetPack 6) or non-GPU packages from upstream ROS. It doesn’t really matter whether every ROS node runs in its own container or not, although ROS developers tend to prefer a single container or no container at all. In reality, ROS nodes are microservices like other dockerized applications, which run into build conflicts because of the increasing complexity of integrating AI systems.
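For anyone finding this thread later: those helper scripts are driven through the jetson-containers CLI. A rough sketch of how the pieces combine (the package names here are examples — check the package list in your jetson-containers checkout, and `my_ros` is just a placeholder name):

```shell
# Build a container that chains a ROS 2 Humble desktop install with
# CUDA-enabled packages; jetson-containers layers the listed packages
# into a single image.
jetson-containers build --name=my_ros ros:humble-desktop pytorch

# Run it with the usual Jetson device/runtime flags handled for you;
# autotag picks the image tag matching your installed JetPack/L4T version.
jetson-containers run $(autotag my_ros)
```

This keeps the GPU/CUDA stack from the dusty-nv images while letting the build system compile the extra ROS packages into the image for you, instead of cloning them by hand at runtime.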
