Jetson Nano ROS2 publisher and image processing

Hi, I want to publish images via ROS2 on a Jetson Nano, but I also want to use them myself.
As far as I know, we can get image data from CSI cameras and publish it directly.
But I also need to process it in my own code.

I could not find anything about this on the GitHub pages.

The question is: how can I get image data from the Jetson library and process the image?
I would also like to know whether I can process the image while it is still in the jetson.utils datatype (not a numpy array).

I will use the image in the same node, and I'll publish it so other devices can use it.

thanks,


Hi @muhammedsezer12, you can find a ROS2 package for CSI cameras at https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_argus_camera

There is also the one from jetson-inference / ros_deep_learning: https://github.com/dusty-nv/ros_deep_learning#video_output-node

You can use it with the functions from jetson.utils / jetson.inference, or see here for documentation about the memory structure: https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-image.md#image-capsules-in-python
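For illustration, here is a rough sketch of capturing a frame and touching its pixels while it is still a jetson.utils cudaImage capsule (this is an assumption-laden example, not code from this thread: it assumes the jetson.utils Python bindings from jetson-inference are installed and a CSI camera is on sensor 0, and the in-place edit at the end is only a placeholder for real processing):

```python
import jetson.utils

camera = jetson.utils.videoSource("csi://0")    # assumed URI: CSI camera on sensor 0

img = camera.Capture()                          # returns a cudaImage capsule
print(img.width, img.height, img.format)        # metadata is exposed on the capsule

# On Jetson, the capsule is typically allocated as mapped (zero-copy) memory,
# so cudaToNumpy() returns a numpy array that aliases the same buffer.
array = jetson.utils.cudaToNumpy(img)
jetson.utils.cudaDeviceSynchronize()            # make sure the GPU has finished writing
array[:, :, 0] = 0                              # placeholder processing: zero the red channel
```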


Hi, thanks.

Last question: is there any way to publish an image directly from the image capsule datatype, without any conversion to OpenCV?

The ros_deep_learning package doesn’t use OpenCV for the image conversion (it uses CUDA). But there will always be the step of serializing the image into the ROS message.

Hi,

Real last question: how do you serialize the image in your code?
How can I do the same in Python?

Hi @muhammedsezer12, you can find the code for converting CUDA image to ROS image message here:

https://github.com/dusty-nv/ros_deep_learning/blob/2b9b61288f7a93e2bbdb3ccc450bcf34a6d4cf04/src/image_converter.cpp#L159
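For a rough Python equivalent (a sketch under assumptions, not the ros_deep_learning code itself, which is C++ and copies with CUDA; the node name, topic, frame rate, and rgb8 capture format here are illustrative), the idea is to map the cudaImage to numpy with cudaToNumpy() and copy the bytes into a sensor_msgs/Image:

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
import jetson.utils


class CsiPublisher(Node):
    def __init__(self):
        super().__init__('csi_publisher')
        self.pub = self.create_publisher(Image, 'image_raw', 10)
        self.camera = jetson.utils.videoSource("csi://0")   # assumed CSI camera URI
        self.timer = self.create_timer(1.0 / 30.0, self.tick)

    def tick(self):
        cuda_img = self.camera.Capture()                # cudaImage capsule (rgb8 by default)
        array = jetson.utils.cudaToNumpy(cuda_img)      # numpy view of the same buffer
        jetson.utils.cudaDeviceSynchronize()            # wait for the capture to finish

        # Serialize into the ROS message: fill the metadata and copy the pixel bytes.
        msg = Image()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = 'camera'
        msg.height = cuda_img.height
        msg.width = cuda_img.width
        msg.encoding = 'rgb8'                           # must match the capture format
        msg.is_bigendian = 0
        msg.step = cuda_img.width * 3                   # bytes per row for rgb8
        msg.data = array.tobytes()
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(CsiPublisher())


if __name__ == '__main__':
    main()
```

The tobytes() copy is the serialization step mentioned above; it happens on the CPU here, whereas ros_deep_learning does the format conversion on the GPU before filling the message.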

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.