Jetson Nano with ROS2 Foxy on Ubuntu 18.04 with a CSI camera?


I need to use a CSI camera with ROS2 Foxy on a Jetson Nano (developer kit) running Ubuntu 18.04 (it must be 18.04), but I’m uncertain whether it’s possible. There are a few tutorials out there, but does anyone have first-hand experience with this?

It’s always possible to use Docker; I’ve seen many people write about it online, but I’ve yet to see anyone try it with a CSI camera. I’ve only seen USB LIDAR and similar.

Grateful for any help!

Hi @gustav.a, what I would do in your case is to use one of these ros:foxy-pytorch-* containers:

And then install the ros_deep_learning package on top and use the video_source node, which supports MIPI CSI cameras:
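For reference, once the ros_deep_learning package is built inside the container, bringing up the CSI camera usually comes down to its launch file. A sketch (the launch-file name, the csi://0 input string, and the /video_source/raw topic are assumptions based on the package’s documented conventions; the workspace path is hypothetical):

```shell
# Inside the container: overlay the workspace where ros_deep_learning was built
source ~/ros_workspace/install/setup.bash

# Launch the video_source node against the first MIPI CSI camera
ros2 launch ros_deep_learning video_source.ros2.launch input:=csi://0

# In a second terminal: confirm frames are actually being published
ros2 topic hz /video_source/raw
```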

The reason for starting with the ros:foxy-pytorch container, as opposed to the ros:foxy-ros-base container, is that the foxy-pytorch container already has jetson-inference installed, which makes it easy to install ros_deep_learning like this:

Alright, but what if I am not planning on implementing deep learning? What would be the difference compared to going to your repository, downloading the container, and getting it by executing $ ./scripts/ --distro foxy? Maybe adding --with-pytorch if that makes life easier in terms of installing?

This is the camera that will be used.

Edit: Added link to the camera

It just so happens that the ROS/ROS2 MIPI CSI camera node (which is implemented using the videoSource interface from jetson-utils) is bundled in the same project as the deep learning nodes from jetson-inference.

If you are comfortable with writing your own ROS nodes, then you could write your own node around MIPI CSI camera libraries that have fewer dependencies to install, like jetcam:

Alternatively, you may want to test if your MIPI CSI camera also creates a /dev/video* node (some do), in which case you could just use an existing ROS node like v4l2_camera to read it.
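If the camera does show up under /dev/video*, a minimal check-and-run sequence might look like this sketch (assumes the sensor enumerates as /dev/video0; on Foxy the v4l2_camera package may need to be built from source in your workspace if no binary package is available for your setup):

```shell
# See whether the CSI camera registered a V4L2 device node
ls /dev/video*

# Inspect the pixel formats it offers (v4l2-ctl is from the v4l-utils package)
v4l2-ctl --device=/dev/video0 --list-formats-ext

# Run the generic V4L2 camera node against that device
ros2 run v4l2_camera v4l2_camera_node --ros-args -p video_device:="/dev/video0"
```

One caveat: many CSI sensors expose raw Bayer frames on /dev/video*, which v4l2_camera does not debayer, so the Argus/GStreamer path used by video_source is often the more practical route.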

I’ve tried to soak this in for days but I’m still unsure about how everything would pan out.

I can almost see how to do the first step with Docker: just navigate to a reasonable path, like a new folder in ~/ros2_dev, and do:

docker pull dustynv/ros:foxy-pytorch-l4t-r34.1.1

Edit#1: I ran jetsonUtilities and found out that L4T is 32.7.1 [JetPack UNKNOWN]. Will the above version still work, or do I need to install the r32.7.1 container? (foxy)
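(In the meantime: the L4T release can also be read straight from /etc/nv_tegra_release, and the matching container tag derived from it. The parsing below is an illustrative sketch; the tag format follows the usual dustynv/ros naming scheme, and the example line contents are made up.)

```shell
# Example first line of /etc/nv_tegra_release on JetPack 4.6.x;
# on a real board you would use: head -n1 /etc/nv_tegra_release
L4T_LINE='# R32 (release), REVISION: 7.1, GCID: 29818004, BOARD: t210ref'

# Extract "r32.7.1" from that line
L4T_VERSION=$(echo "$L4T_LINE" | sed -n 's/^# R\([0-9]*\) (release), REVISION: \([0-9.]*\),.*/r\1.\2/p')

# Compose the container tag matching this L4T release
echo "dustynv/ros:foxy-pytorch-l4t-${L4T_VERSION}"
```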

I guess the image is then “present” on the Jetson Nano, but no container has been started from it yet? After that, would something like this be correct?

docker run -it --name master dustynv/ros:foxy-pytorch-l4t-r34.1.1

Check the list of all containers (running and stopped) and get the container ID:

docker ps -a

Open a new terminal and do:

docker exec -it <container ID> bash

That terminal is then inside of the container, I suppose? A terminal with “access” to ROS2?
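A quick way to confirm that a shell really is inside the container with ROS2 available would be something like this sketch (the setup.bash path is an assumption about where the image installs ROS; containers built from source often use /opt/ros/foxy/install):

```shell
# Source ROS2 manually if the container entrypoint did not already do it
source /opt/ros/foxy/install/setup.bash

# Both of these should now succeed inside the container
printenv ROS_DISTRO   # expected to print: foxy
ros2 topic list       # should list at least /parameter_events and /rosout
```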

The next step, installing the ros_deep_learning package “on top”, is also something I am unsure about. The installation section calls for installing jetson-inference, but where will that be installed? I assume in a terminal that has the Docker container open, so that it can actually be accessed by ROS2, or is it in a “regular” terminal on the Nano? Aka, in which terminal is this supposed to be executed:

$ cd ~
$ sudo apt-get install git cmake
$ git clone --recursive https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ mkdir build
$ cd build
$ cmake ../
$ make -j$(nproc)
$ sudo make install
$ sudo ldconfig

Maybe I’ve gotten confused by Docker, but I view containers as completely independent and isolated environments, separate from the “regular” environment on the host computer.

It also says that either ROS Melodic or ROS2 Eloquent is needed, but I assume Foxy works as well since you recommended it.

Since this:

$ cd ~/ros_workspace/src
$ git clone

requires you to navigate to the ros2 workspace, I assume this has to be done in a terminal that has “opened” the Docker container?

Thanks for your help so far!

Edit: See Edit#1


The last step, where

cd ${WORKSPACE_ROOT}/src &&
git clone https://github.com/dusty-nv/ros_deep_learning &&
cd ../ &&
colcon build --symlink-install --event-handlers console_direct+

I assume this can also just be executed in a terminal (normal or Docker?), except without actually writing “RUN”, “${WORKSPACE_ROOT}”, etc.? Aka cloning the repository into src and then running colcon build --symlink-install?

Hi @gustav.a, apologies for the delay - since your L4T version is 32.7.1, you would want to run dustynv/ros:foxy-pytorch-l4t-r32.7.1. And you will want to start it with --runtime nvidia flag to enable GPU acceleration.
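Put together, the run command might look something like this sketch (the argus_socket mount mirrors what jetson-inference’s docker/run.sh sets up so that csi:// capture works from inside the container; adjust the name and mounts to taste):

```shell
# --runtime nvidia  : GPU access inside the container
# --network host    : ROS2 discovery across host and container
# /tmp/argus_socket : lets the containerized Argus client reach the camera daemon
sudo docker run -it --runtime nvidia --network host \
    --volume /tmp/argus_socket:/tmp/argus_socket \
    --name master \
    dustynv/ros:foxy-pytorch-l4t-r32.7.1
```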

jetson-inference is actually already installed in my ros:foxy-pytorch containers, so you don’t need to worry about that, just the ros_deep_learning package itself.

Yes, you are correct: run these in a Docker terminal (just omit RUN and replace those environment variables).
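Spelled out, the Dockerfile snippet reduces to something like the following inside a container terminal (a sketch; ~/ros_workspace stands in for ${WORKSPACE_ROOT} and can be any colcon workspace):

```shell
# Create a workspace if one does not exist yet
mkdir -p ~/ros_workspace/src
cd ~/ros_workspace/src

# Clone the package and build the workspace
git clone https://github.com/dusty-nv/ros_deep_learning
cd ../
colcon build --symlink-install --event-handlers console_direct+

# Make the freshly built package visible to this shell
source install/setup.bash
```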

No worries about the delay! I actually have everything running now. After some camera driver issues I went to JetPack 4.6 and installed your foxy-pytorch container for L4T 32.6.1. I probably should have closed this thread, since the camera (and its driver) is currently the thing limiting me. Thanks for the help though!

OK great! - glad that you were able to get it working :)
