ROS2 DeepStream Cpp

After all the amazing recent news, what are the best practices for using Jetson for object detection?
Suppose someone has a ROS2 camera-driver node (RealSense/ZED/MIPI, it doesn't matter) and wants to connect it to another node using shared memory for object detection. If they want shared memory, they have to implement the ROS2 nodes in C++, am I wrong?
If so, what is the best way to run inference? Do I need to convert the DeepStream node from Python to C++, or should I use ROS2 nodes provided by NVIDIA such as ros2_torch_trt?
Thank you in advance!

Hi @mark60, to utilize shared memory transport between ROS2 nodes and perform optimized DNN inferencing, I would recommend looking into Isaac ROS:
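To the C++ question above: zero-copy between ROS2 nodes does not strictly require Isaac ROS. Plain rclcpp intra-process communication already hands `std::unique_ptr` messages from publisher to subscriber without copying, as long as both nodes live in the same process (e.g., composed into one component container). A minimal sketch, with hypothetical node and topic names:

```cpp
// Sketch of rclcpp intra-process zero-copy transport.
// When both nodes run in one process with intra-process comms enabled,
// publishing a std::unique_ptr transfers ownership without a copy.
#include <chrono>
#include <memory>
#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/image.hpp>

class CameraNode : public rclcpp::Node {
public:
  CameraNode()
  : Node("camera", rclcpp::NodeOptions().use_intra_process_comms(true)) {
    pub_ = create_publisher<sensor_msgs::msg::Image>("image_raw", 10);
    timer_ = create_wall_timer(std::chrono::milliseconds(33), [this]() {
      auto msg = std::make_unique<sensor_msgs::msg::Image>();
      // ... fill msg->data from the camera driver ...
      pub_->publish(std::move(msg));  // ownership transfer, no copy
    });
  }
private:
  rclcpp::Publisher<sensor_msgs::msg::Image>::SharedPtr pub_;
  rclcpp::TimerBase::SharedPtr timer_;
};

class DetectorNode : public rclcpp::Node {
public:
  DetectorNode()
  : Node("detector", rclcpp::NodeOptions().use_intra_process_comms(true)) {
    sub_ = create_subscription<sensor_msgs::msg::Image>(
      "image_raw", 10,
      [](sensor_msgs::msg::Image::UniquePtr msg) {
        // run TensorRT inference on msg->data here
      });
  }
private:
  rclcpp::Subscription<sensor_msgs::msg::Image>::SharedPtr sub_;
};

int main(int argc, char ** argv) {
  rclcpp::init(argc, argv);
  rclcpp::executors::SingleThreadedExecutor exec;
  auto cam = std::make_shared<CameraNode>();
  auto det = std::make_shared<DetectorNode>();
  exec.add_node(cam);
  exec.add_node(det);
  exec.spin();
  rclcpp::shutdown();
  return 0;
}
```

Note the caveat: this is only zero-copy while publisher and subscriber share a process. Across process boundaries the DDS layer serializes the message, unless your RMW implementation provides its own shared-memory transport.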

There are also these ROS/ROS2 nodes available for the jetson-inference library:

Hello @dusty_nv ,
Thank you for your response.
What you sent was actually the reason for my question. So, if I understand correctly, it's time to say goodbye to DeepStream with ROS?
Do you still recommend using TAO?

Certainly you could still use DeepStream for optimized inference using TAO models with ROS2 if you made a node for it. Here is another thread about that.

Thank you for the link!
But what is the best practice recommended by you/NVIDIA?
I don't want to continue into a dead end. If DeepStream is out, that's OK with me; I guess I can also convert the .etlt model to a TensorRT engine and run it with a ROS2 node.
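For reference, the .etlt-to-TensorRT conversion mentioned above can be done on the Jetson with NVIDIA's tao-converter tool. A sketch of the invocation; the file names, key, input dimensions, and output node names below are placeholders that must match what your particular TAO model was exported with:

```shell
# Hypothetical example: build a TensorRT engine from a TAO .etlt model.
#   -k : the key the model was exported with
#   -d : input dimensions (C,H,W)
#   -o : output node name(s) of the network
#   -t : target precision
#   -e : path for the generated engine file
./tao-converter \
  -k "$MODEL_KEY" \
  -d 3,544,960 \
  -o output_bbox/BiasAdd,output_cov/Sigmoid \
  -t fp16 \
  -e detectnet_v2.fp16.engine \
  detectnet_v2.etlt
```

The resulting .engine file can then be deserialized with the TensorRT runtime inside your own ROS2 C++ node, with no DeepStream dependency.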

Can you still use TAO with ROS2 inference nodes?
Is there some way to convert it to ONNX, or do these nodes support .etlt?

Hi @mark60, yes you can use TAO-trained models with Isaac ROS (ROS2) - see the section titled DNN Inference Processing on this page:

It's compatible only with JetPack 5.0.2, isn't it?
I still have projects that use JetPack 4.6 because of carrier boards. Is there a solution for this?

OK, if you want to use TAO models in ROS2 on JetPack 4.x, I would try using the previous ros2_deepstream nodes then.

I use these nodes…
But I'm looking for something in C++ for fast data transfer, zero-copy, etc.

ros2_deepstream does zero-copy within the camera/inferencing pipeline, because that is all contained within the DeepStream node rather than broken up into separate ROS nodes. Once it publishes the detection metadata to the ROS topic, however, that is no longer zero-copy (but it is not high-bandwidth data at that point).

For breaking the camera/inferencing pipeline up into multiple ROS nodes, Isaac ROS implements NITROS to achieve zero-copy in ROS2; however, that would require upgrading your systems to JetPack 5.x.

