ROS node for Arducam won't work?

We tried the ROS1 Jetson CSI node,
and also the ROS2 v4l2_camera node with a virtual camera device emulated through v4l2loopback.
Neither works:
the camera topics are silent.
Are there any solutions for using an Arducam on a Jetson within ROS1, ROS2, or both?

Hi @Andrey1984, does your CSI camera work outside of ROS using nvgstcapture-1.0 or video-viewer from jetson-inference?

Hi, @dusty_nv
thank you for following up
We got the Jetson NX version from Arducam, the one with the autofocus (AF) function.
It works outside of ROS in two scenarios so far:

  1. if we loop it through a v4l2 emulated webcam
  2. if we use the GStreamer nvarguscamerasrc element, or the Python scripts provided by Arducam (camera preview, etc.)

Which exact nvgstcapture-1.0 command did you have in mind? Running it without arguments produces something in the terminal but no graphical output. However, I have not tried video-viewer yet.

What’s the gstreamer pipeline that works for it?

See if video-viewer works with it, because the video_source ROS node uses the same code underneath.

What nvgstcapture pipeline works for the Arducam IMX477 on the Jetson NX model?
We will try video-viewer next time.

For a MIPI CSI camera, nvgstcapture-1.0 should work without additional arguments. You would only see the window pop up if you have a display attached.

Could you clarify which exact file from jetson-inference is referred to as video-viewer? Is it a binary, or a C/Python script?

jetson-inference$ ls
c             CMakeLists.txt    docker      examples    python     utils
calibration   CMakePreBuild.sh  Dockerfile  LICENSE.md  README.md
CHANGELOG.md  data              docs        plugins     tools

It seems to be jetson-utils/video-viewer.py at master · dusty-nv/jetson-utils · GitHub,
from jetson-utils though;
you probably meant the jetson-utils video-viewer.
Yes, video-viewer shows the video output.
However, nvgstcapture without arguments won't show the output.
But the issue is that the ROS node won't publish any information on the camera_info and image_raw topics.
what will be the proper command to run CSI node for ROS2 foxy? for ROS 1 noetic with arducam?

 * /csi_cam_0/camera_name: csi_cam_0
 * /csi_cam_0/frame_id: /csi_cam_0_link
 * /csi_cam_0/image_height: 1080
 * /csi_cam_0/image_width: 1920
 * /csi_cam_0/sync_sink: True
 * /csi_cam_0/target_fps: 30
 * /rosdistro: noetic
 * /rosversion: 1.15.9

NODES
  /
    csi_cam_0 (gscam/gscam)

auto-starting new master
process[master]: started with pid [9172]
ROS_MASTER_URI=http://localhost:11311

setting /run_id to 83075158-8c43-11eb-ba2f-e53e4da51ed6
process[rosout-1]: started with pid [9186]
started core service [/rosout]
process[csi_cam_0-2]: started with pid [9189]
[ WARN] [1616550718.411923318]: Camera calibration file /home/nx/.ros/camera_info/csi_cam_0.yaml not found.
~/catkin_ws$ rostopic list 
/csi_cam_0/camera_info
/csi_cam_0/image_raw
/rosout
/rosout_agg

 rostopic echo /csi_cam_0/camera_info

The topics are empty.

gst-launch-1.0 nvarguscamerasrc sensor_id=0 ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1, format=NV12' ! nvvidconv ! nvegltransform ! nveglglessink -e

This GStreamer pipeline shows output, while nvgstcapture only shows terminal output. Moreover, video-viewer also shows output, while the ROS topic doesn't show anything.

/catkin_ws$ rostopic echo /csi_cam_0/image_raw


That is the situation: no information on the topics.

Are you using the video_source node from ros_deep_learning package or some other video node? ros_deep_learning is built on top of jetson-inference / jetson-utils. Since video-viewer is working for you, hopefully so will video_source node from ros_deep_learning.

@dusty_nv

ros2 launch ros_deep_learning imagenet.ros2.launch input:=csi://0 output:=display://0

does work showing the outputs

But how do we run the node that publishes raw images, i.e. the video_source node?

If you are running it remotely, you won’t see the OpenGL window.

If you use the ros2 topic utility, you can confirm that the video_source node is publishing messages to the /raw topic. You can adapt the launch file or make your own so that it only launches the video_source node.
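For example, a minimal standalone launch file modeled on the XML *.ros2.launch files shipped with ros_deep_learning might look like the sketch below. The parameter names here (resource, width, height, framerate) are assumptions to verify against node_video_source.cpp in your checkout:

```xml
<launch>
    <!-- hypothetical standalone launch: video_source only, no viewer node -->
    <node pkg="ros_deep_learning" exec="video_source" output="screen">
        <param name="resource" value="csi://0"/>
        <param name="width" value="1920"/>
        <param name="height" value="1080"/>
        <param name="framerate" value="30.0"/>
    </node>
</launch>
```

Saved as e.g. video_source_only.ros2.launch, it could be started with `ros2 launch ./video_source_only.ros2.launch` and should publish /video_source/raw without opening a display window.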

@dusty_nv
I can see the OpenGL window when running remotely because an HDMI emulator is attached,
but we need two topics, raw and also camera_info, to feed into nvapriltags ROS2 processing.
I can see only

ros2 topic echo /video_source/raw

It has the data, fortunately, when the viewer is running.

How do we run the node without the viewer? I understand that editing the launch file can omit the viewer, but how do we add the camera_info topic?
It only publishes these topics:

ros2 topic list
/parameter_events
/rosout
/video_source/raw

but nvapriltags takes two inputs - image raw, but also camera_info


[WARN] [1616558595.601006044] [apriltag.apriltag]: [image_transport] Topics '/video_source/raw' and '/apriltag/camera_info' do not appear to be synchronized. In the last 10s:
[component_container-1] 	Image messages received:      15
[component_container-1] 	CameraInfo messages received: 0
[component_container-1] 	Synchronized pairs:           0

Also, is there a way to pass a flip / upside-down argument?
And how do we add camera_info?

You can add a flip string argument to the video_source node here: https://github.com/dusty-nv/ros_deep_learning/blob/ac40e93413f4b7cb911a18c0e4d5daac479234d4/src/node_video_source.cpp#L99

std::string flip_str;
ROS_DECLARE_PARAMETER("flip", flip_str);
ROS_GET_PARAMETER("flip", flip_str);

if( flip_str.size() != 0 )
     video_options.flipMethod = videoOptions::FlipMethodFromStr(flip_str.c_str());

You should then be able to set the flip ros param on the node in your launch file. Valid values for the flip strings are found here: https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-streaming.md#input-options
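With that patch applied, the flip parameter could be set from a launch file like any other node parameter. A hypothetical fragment (node name and flip value rotate-180 taken from the jetson-utils flip-method options linked above, to be double-checked):

```xml
<node pkg="ros_deep_learning" exec="video_source" output="screen">
    <param name="resource" value="csi://0"/>
    <param name="flip" value="rotate-180"/>
</node>
```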

I don’t publish camera_info topic, probably because I don’t have the intrinsic calibration data. However you could add it to the video_source node by modifying the source. These parts of the code are where the image publisher is initially advertised, and then where the image message is actually published:


@dusty_nv
Thank you for pointing out where to put the extra code.
But it seems it would also require knowing exactly which code needs to be added there for the Arducam IMX477.

The camera_info message contains a lot of calibration parameters, so yes you would need to calibrate your camera in order to effectively fill out the message.