Deep learning inference on ROS2

We are currently using ROS2 on a Jetson Xavier NX. Our use case requires very low-latency inference so that we can perform high-speed physical actions based on the results.

We are aware of, and have made use of, the ros_deep_learning nodes. We would like to keep using them because they are written in C++ and perform image pre-processing (Bayer conversion etc.) on the GPU, which reduces inference latency. These nodes require a TensorRT engine. Some models, however, cannot be optimised into a pure TensorRT engine because some of their layers are not supported.
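For readers unfamiliar with the problem: when a model contains ops TensorRT cannot compile, hybrid runtimes (for example ONNX Runtime's TensorRT execution provider) split the graph into TensorRT-compatible subgraphs and run the rest on a fallback backend. A minimal sketch of that partitioning idea follows; the op names and the "supported" set are made up for illustration, not taken from any real TensorRT support matrix:

```python
# Hypothetical list of ops in a model, in execution order.
MODEL_OPS = ["Conv", "Relu", "Conv", "CustomDeformableConv", "Relu", "Gemm"]

# Hypothetical set of ops the accelerated backend can compile.
TRT_SUPPORTED = {"Conv", "Relu", "Gemm"}

def partition(ops, supported):
    """Split the op sequence into contiguous segments, each tagged with
    whether it could run inside a compiled engine or must fall back."""
    segments = []
    for op in ops:
        ok = op in supported
        if segments and segments[-1][0] == ok:
            segments[-1][1].append(op)  # extend the current segment
        else:
            segments.append((ok, [op]))  # start a new segment
    return segments

for on_engine, seg in partition(MODEL_OPS, TRT_SUPPORTED):
    backend = "TensorRT engine" if on_engine else "fallback (e.g. CUDA/CPU)"
    print(f"{backend}: {seg}")
```

Each engine/fallback boundary adds a transition cost at runtime, which is why a model that compiles into a single pure TensorRT engine usually has the lowest latency.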

Is there currently a standard/preferred method of running low-latency inference with these models on ROS2?

Hi @sidney.rubidge, you can try this package for running models with PyTorch in ROS2:

There is also this Isaac ROS package, which can run models through Triton; Triton supports various model formats and backends (including ONNX Runtime, TensorRT engine plans, TensorFlow, and PyTorch).
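For context on how Triton handles the backend choice: each model gets a `config.pbtxt` in its model repository directory that names the backend/platform. An illustrative config for an ONNX model served through ONNX Runtime (model name, tensor names, and dims are placeholders):

```text
name: "my_model"
platform: "onnxruntime_onnx"
max_batch_size: 1
input [
  {
    name: "input"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "output"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```

Switching the same model to a pre-built TensorRT engine is done by exporting an engine plan file and setting `platform: "tensorrt_plan"` instead.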

Thank you @dusty_nv, those repos look like they are going to be very useful.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.