Unable to communicate with RPLIDAR A1 and stereo camera from a ROS2 Humble Docker container on a Jetson Nano

Hi,
I'm working with a Jetson Nano on JetPack 4.6.1 (Ubuntu 18.04), and I'm using NVIDIA's recommended Docker image for ROS2 Humble desktop.

I'm trying to use packages that convert RPLIDAR scan data and camera video into topics, but I haven't been able to get them working. If something has worked for you, I'd be happy if you shared it with me.

Here is what I tried:

I compiled the sllidar_ros2 package (GitHub - Slamtec/sllidar_ros2) and the v4l2_camera package (GitHub - tier4/ros2_v4l2_camera, forked from https://gitlab.com/boldhearts/ros2_v4l2_camera). I gave the necessary permissions to the devices the Jetson reads (/dev/ttyUSB0 and /dev/video0), and I can see them inside the Docker container with ls.
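For reference, this is roughly how I exposed the devices (the image name below is just a placeholder for the ROS2 Humble image I'm using):

# on the host: give read/write access to the lidar's serial port
sudo chmod 666 /dev/ttyUSB0

# start the container with both devices passed through
sudo docker run --runtime nvidia -it --rm \
    --device=/dev/ttyUSB0 \
    --device=/dev/video0 \
    <ros2_humble_image>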

But when I try to execute the sllidar node with:
ros2 launch sllidar_ros2 view_sllidar_a1_launch.py
It gives me this error:
[ERROR] [1696854026.307146787] [sllidar_node]: Error, operation time out. SL_RESULT_OPERATION_TIMEOUT!

I checked a few posts, but none of the solutions provided helped me.
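For example, one common suggestion is to pass the serial port and baud rate to the launch file explicitly (sllidar_ros2 exposes serial_port and serial_baudrate as launch arguments), but I still got the same timeout:

ros2 launch sllidar_ros2 view_sllidar_a1_launch.py serial_port:=/dev/ttyUSB0 serial_baudrate:=115200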

Then, when I run the v4l2 camera node:
ros2 run v4l2_camera v4l2_camera_node

It gives me this (which seems to be working perfectly):

[INFO] [1696859778.972843018] [v4l2_camera]: Driver: tegra-video
[INFO] [1696859778.973159781] [v4l2_camera]: Version: 264703
[INFO] [1696859778.973238425] [v4l2_camera]: Device: vi-output, imx219 7-0010
[INFO] [1696859778.973283945] [v4l2_camera]: Location: platform:54080000.vi:0
[INFO] [1696859778.973325402] [v4l2_camera]: Capabilities:
[INFO] [1696859778.973366391] [v4l2_camera]: Read/write: NO
[INFO] [1696859778.973417692] [v4l2_camera]: Streaming: YES
[INFO] [1696859778.973479565] [v4l2_camera]: Current pixel format: YUYV @ 3264x2464
[INFO] [1696859778.973670238] [v4l2_camera]: Available pixel formats:
[INFO] [1696859778.973717216] [v4l2_camera]: RG10 - 10-bit Bayer RGRG/GBGB
[INFO] [1696859778.973757788] [v4l2_camera]: Available controls:
[INFO] [1696859778.974774326] [v4l2_camera]: Requesting format: 640x480 YUYV
[INFO] [1696859778.974930520] [v4l2_camera]: Success
[INFO] [1696859778.975673835] [v4l2_camera]: Starting camera

But when I check the /image_raw topic, nothing is being published.
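I checked with something like:

ros2 topic list
ros2 topic hz /image_raw    # waits forever; no rate is ever reported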

I don't know if I'm making a mistake by trying to read the devices inside Docker, or if I'm missing something.

@thiagoromero42 I can't speak to the RPLIDAR, but here you are trying to use a MIPI CSI camera through the V4L2 /dev/video interface, and that video will be pre-ISP (notice that the only available output format is Bayer). I'm not sure, but ROS's built-in v4l2_camera node may not support raw Bayer.
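You can confirm what the driver exposes with v4l2-ctl from the v4l-utils package:

# list every pixel format/resolution the V4L2 driver offers on /dev/video0
v4l2-ctl --device=/dev/video0 --list-formats-ext

If RG10 Bayer is the only entry, then whatever consumes /dev/video0 has to debayer the frames itself.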

Instead, to debug it you could try a USB webcam, and then use one of these packages for the MIPI CSI camera acquisition node:

Hi, thanks for your reply!

I was following the argus_camera container instructions, and when I reach the colcon build --symlink-install step it throws this error:
CMake Error at CMakeLists.txt:22 (find_package):
By not providing "Findvpi.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "vpi", but
CMake did not find one.

Could not find a package configuration file provided by "vpi" with any of
the following names:

  vpiConfig.cmake
  vpi-config.cmake

I'm following the instructions from the link you provided.

Oops, I'm sorry - I forgot you were on Jetson Nano and JetPack 4, and Isaac ROS requires JetPack 5:

Instead, you could try the dustynv/ros:humble-pytorch-l4t-r32.7.1 container, which has the ros_deep_learning package and its video_source node in it.
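Once inside that container, the video_source node can be started from its launch file, something like this:

# publish the CSI camera as a ROS2 image topic
ros2 launch ros_deep_learning video_source.ros2.launch input:=csi://0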

I ran these commands outside the container and the cameras work perfectly:

gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! "video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1" ! nvvidconv ! xvimagesink sync=false

gst-launch-1.0 nvarguscamerasrc sensor-id=1 ! "video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1" ! nvvidconv ! xvimagesink sync=false

But when I run the container and execute:
video-viewer csi://1 output.mp4

I get the lines below, as if there were nothing at csi://1. Do you know why this could be?

[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera -- attempting to create device csi://1
[gstreamer] gstCamera pipeline string:
[gstreamer] nvarguscamerasrc sensor-id=1 ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw(memory:NVMM) ! appsink name=mysink
(Argus) Error FileOperationFailed: Connecting to nvargus-daemon failed: No such file or directory (in src/rpc/socket/client/SocketClientDispatch.cpp, function openSocketConnection(), line 205)
(Argus) Error FileOperationFailed: Cannot create camera provider (in src/rpc/socket/client/SocketClientDispatch.cpp, function createCameraProvider(), line 106)
[gstreamer] gstCamera successfully created device csi://1
[video] created gstCamera from csi://1

gstCamera video options:

-- URI: csi://1
   - protocol: csi
   - location: 1
   - port: 1
-- deviceType: csi
-- ioType: input
-- width: 1280
-- height: 720
-- frameRate: 30
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: rotate-180

[gstreamer] gstEncoder -- codec not specified, defaulting to H.264
[gstreamer] gstEncoder -- pipeline launch string:
[gstreamer] appsrc name=mysource is-live=true do-timestamp=true format=3 ! omxh264enc name=encoder bitrate=4000000 ! video/x-h264 ! h264parse ! qtmux ! filesink location=output.mp4
[video] created gstEncoder from file:///output.mp4

gstEncoder video options:

-- URI: file:///output.mp4
   - protocol: file
   - location: output.mp4
   - extension: mp4
-- deviceType: file
-- ioType: output
-- codec: H264
-- codecType: omx
-- frameRate: 30
-- bitRate: 4000000
-- numBuffers: 4
-- zeroCopy: true

[OpenGL] failed to open X11 server connection.
[OpenGL] failed to create X11 Window.
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> nvvconv0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> nvarguscamerasrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvvconv0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvarguscamerasrc0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer message stream-start ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvvconv0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvarguscamerasrc0
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:735 Failed to create CameraProvider
[gstreamer] gstCamera -- end of stream (EOS)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer message async-done ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0
[gstreamer] gstreamer pipeline0 recieved EOS signal...
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
^Creceived SIGINT
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
video-viewer: shutting down...

@thiagoromero42 does video-viewer work okay with csi://0, or is the issue only with the csi://1 channel?

The sensor ID is getting set correctly here in the log:

[gstreamer] nvarguscamerasrc sensor-id=1 ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw(memory:NVMM) ! appsink name=mysink

so it must be some issue inside GStreamer or the Argus daemon. How did you start the container? If you used your own invocation of the docker run command, did you include the --volume /tmp/argus_socket:/tmp/argus_socket flag, like here?
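You can also check on the host that the daemon is actually running:

# the Argus camera daemon runs on the host, not in the container
sudo systemctl status nvargus-daemon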

I run it like this:
sudo docker run --runtime nvidia -it --rm --network=host --device=/dev/video0 --device=/dev/video1 dustynv/ros:humble-pytorch-l4t-r32.7.1

And yes, I have tried both csi://0 and csi://1; both give the same error.

Should I add --volume /tmp/argus_socket:/tmp/argus_socket to my command?

Ah okay yes, try:

sudo docker run --runtime nvidia -it --rm --network=host --device=/dev/video0 --device=/dev/video1 --volume /tmp/argus_socket:/tmp/argus_socket dustynv/ros:humble-pytorch-l4t-r32.7.1

If you want to use a GUI from the container, you also have to add additional mounts, like here: https://github.com/dusty-nv/jetson-containers/blob/f46caf6843bf248258c91cf0c39bfdd3217d35fa/run.sh#L21

Thank you, I was able to save a video inside the container and then move it outside; it worked perfectly.
Now I will try the ROS2 nodes, I hope they work too :D

Can you tell me what I should add to sudo docker run --runtime nvidia -it --rm --network=host --device=/dev/video0 --device=/dev/video1 --volume /tmp/argus_socket:/tmp/argus_socket dustynv/ros:humble-pytorch-l4t-r32.7.1 so that I can run a GUI from the container? And do you know if there is a Docker image with rviz2 and rqt included?
Sorry, I'm new to Docker and these things.

@thiagoromero42 first you would need to run sudo xhost +si:localuser:root, then add -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix to your command line like this:

sudo docker run --runtime nvidia -it --rm --network=host --device=/dev/video0 --device=/dev/video1 --volume /tmp/argus_socket:/tmp/argus_socket -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix dustynv/ros:humble-pytorch-l4t-r32.7.1
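Then a GUI application started inside the container should open a window on the Jetson's display, for example (assuming the image you run includes it; the desktop variants do):

# from inside the container: if this opens a window, X forwarding works
rviz2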

I have dustynv/ros:humble-desktop-l4t-r32.7.1, but I don't believe it includes the ros_deep_learning package.

Thank you very much for your help!

I'm sorry to bother you again. Is there a way to obtain the camera_info topic using ros_deep_learning?
I want to use stereo_image_proc, and I think camera_info is necessary for it. I could do the calibration, but I need the camera_info topic to be published.
I hope you can help me.

Hi @thiagoromero42, sorry for the delay - you would need to add a camera_info topic to the video_source node, or otherwise create a publisher that loads your calibration (like this one).
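As a quick stand-in while you wire that up, you could even publish a static CameraInfo from the command line; the topic name, frame_id, and intrinsics below are placeholders, so substitute the values from your calibration YAML:

# placeholder intrinsics - replace with your calibration values
ros2 topic pub --rate 30 /left/camera_info sensor_msgs/msg/CameraInfo "{
  header: {stamp: now, frame_id: 'left_camera'},
  width: 1280, height: 720,
  distortion_model: 'plumb_bob',
  d: [0.0, 0.0, 0.0, 0.0, 0.0],
  k: [900.0, 0.0, 640.0, 0.0, 900.0, 360.0, 0.0, 0.0, 1.0],
  r: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0],
  p: [900.0, 0.0, 640.0, 0.0, 0.0, 900.0, 360.0, 0.0, 0.0, 0.0, 1.0, 0.0]}"

Keep in mind that stereo_image_proc matches image and camera_info messages by header stamp, so for real use the camera_info should come from the same node that stamps the images.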
