Adapting ROS Graph to use NITROS

I am trying to adapt an existing ROS graph to use NITROS for accelerated transport, connecting our camera node and our autonomy node. Our current workflow, in a composable container with zero-copy enabled, is as follows (a simplified sketch of the camera side appears after the list):

  • The camera node uses OpenCV to read frames from an e-con GMSL camera via V4L2 and publishes the images
  • The autonomy node subscribes to those images, uploads them to the GPU, and runs the model
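
For concreteness, here is a simplified sketch of roughly what the camera node does today (class, topic, and device names are placeholders; include paths may vary by ROS 2 distro):

```cpp
// Simplified sketch of the current camera node: OpenCV captures via V4L2 into
// CPU memory and the frame is published as a sensor_msgs/Image. The GPU only
// sees the image after the autonomy node uploads it.
#include <chrono>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/videoio.hpp>
#include <rclcpp/rclcpp.hpp>
#include <rclcpp_components/register_node_macro.hpp>
#include <sensor_msgs/msg/image.hpp>
#include <std_msgs/msg/header.hpp>

class CameraNode : public rclcpp::Node {
 public:
  explicit CameraNode(const rclcpp::NodeOptions& options)
      : Node("camera_node", options), cap_("/dev/video0", cv::CAP_V4L2) {
    pub_ = create_publisher<sensor_msgs::msg::Image>("image_raw", 10);
    timer_ = create_wall_timer(std::chrono::milliseconds(33), [this]() {
      cv::Mat frame;
      if (!cap_.read(frame)) { return; }  // frame lives in CPU memory
      std_msgs::msg::Header header;
      header.stamp = now();
      header.frame_id = "camera";
      pub_->publish(*cv_bridge::CvImage(header, "bgr8", frame).toImageMsg());
    });
  }

 private:
  cv::VideoCapture cap_;
  rclcpp::Publisher<sensor_msgs::msg::Image>::SharedPtr pub_;
  rclcpp::TimerBase::SharedPtr timer_;
};

// Loaded into the composable container alongside the autonomy node.
RCLCPP_COMPONENTS_REGISTER_NODE(CameraNode)
```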

We are trying to change this so that the camera node reads the image directly into GPU memory, which is then sent via NITROS to the autonomy node. When I read the NITROS source code, and some of the Isaac ROS GEMs like isaac_ros_argus_camera, there is a lot of ‘magic’ going on: the callbacks are empty, or there are no callbacks at all.
I have also been looking at jetson-utils and its ability to read a V4L2 camera and place the frame directly into GPU memory.

Our question, then, is how to get started with adapting our existing nodes to use NITROS, and whether there are any relevant examples for us. I would also like to know how NITROS and jetson-utils relate to the Jetson Multimedia APIs. Thanks!

I have the same question. @hemals any ideas/pointers?

I see some discussion here: General NITROS questions

Let me see if I can break down the great pipeline you’re trying to set up here. Problem 1 is a camera node that reads directly into GPU memory. Problem 2 is sending that GPU camera image to other nodes through NITROS with zero copies.

The isaac_ros_argus_camera packages leverage the libArgus APIs (see the Libargus Camera API section of the Jetson Linux API Reference) on Jetson with certified camera drivers to pull images off of a GMSL-connected camera directly into GPU memory. V4L2 and OpenCV, in general, are not meant to work with hardware accelerators and GPU memory, however. I would recommend using Argus directly to achieve this unless V4L2 is somehow required in your stack.

Problem 2 is where NITROS comes in. Once you have the image in a GPU buffer inside a ROS 2 node, you want to send it to another ROS 2 node, possibly through the DNN Image Encoder node and then on to the TensorRT or Triton nodes for model inference. You can copy and modify existing GXF-based code, which is what NITROS is built on (such as the GXF extensions in the isaac_ros_image_pipeline packages, or others under a “gxf/” directory). We’re also developing a “managed” NITROS interface where you bring your CUDA buffer and some metadata and we make it NITROS-compatible for you, which will be much easier than having to work through the GXF code.
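
To give a feel for that GXF-based code, a bare-bones codelet has roughly this shape (a minimal sketch assuming the usual receiver/transmitter parameter pattern; the names here are illustrative and not taken from any particular Isaac ROS extension):

```cpp
// Minimal sketch of a GXF codelet: receive an entity (e.g. a VideoBuffer held
// in GPU memory), process it, and publish it downstream.
#include "gxf/core/expected.hpp"
#include "gxf/std/codelet.hpp"
#include "gxf/std/receiver.hpp"
#include "gxf/std/transmitter.hpp"

class MyImageCodelet : public nvidia::gxf::Codelet {
 public:
  gxf_result_t registerInterface(nvidia::gxf::Registrar* registrar) override {
    nvidia::gxf::Expected<void> result;
    result &= registrar->parameter(rx_, "rx", "Input", "Incoming image entities");
    result &= registrar->parameter(tx_, "tx", "Output", "Outgoing image entities");
    return nvidia::gxf::ToResultCode(result);
  }

  gxf_result_t tick() override {
    // Entities arrive here instead of in a ROS callback; this is the "magic"
    // the NITROS nodes hide behind their GXF graphs.
    auto maybe_entity = rx_->receive();
    if (!maybe_entity) { return nvidia::gxf::ToResultCode(maybe_entity); }

    // ... operate on the entity's VideoBuffer / Tensor in GPU memory ...

    return nvidia::gxf::ToResultCode(tx_->publish(maybe_entity.value()));
  }

 private:
  nvidia::gxf::Parameter<nvidia::gxf::Handle<nvidia::gxf::Receiver>> rx_;
  nvidia::gxf::Parameter<nvidia::gxf::Handle<nvidia::gxf::Transmitter>> tx_;
};
```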

This is great and thanks for your response!

  1. Is it possible to add custom GXF codelets to the Argus pipeline to customize the captured images? Say we want to capture images from N cameras, preprocess them, and publish them as a single image; should we create a new GXF pipeline for this?

  2. Any timeline on when that managed NITROS interface will be available?

Thank you for the answer. Our current cameras are exposed through V4L2; we will ask the vendor for CSI support. Why does the v4l2cuda (CUDA format conversion) sample in the Jetson Linux API Reference indicate that V4L2-to-CUDA is possible, then? Just trying to understand the limitations.
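
For context, my (possibly wrong) mental model of that sample is that the frame arrives through V4L2 in CPU-visible memory and is then moved into CUDA memory, roughly like this (a simplified sketch of a plain mmap-plus-copy variant; the actual sample may use pinned/zero-copy buffers instead, and the device node, resolution, and pixel format are illustrative; error handling mostly omitted):

```cpp
// Simplified: open a V4L2 device, mmap one buffer, grab a frame, copy it to GPU.
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/videodev2.h>
#include <cuda_runtime.h>
#include <cstdio>

int main() {
  int fd = open("/dev/video0", O_RDWR);
  if (fd < 0) { perror("open"); return 1; }

  v4l2_format fmt{};
  fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
  fmt.fmt.pix.width = 1920;
  fmt.fmt.pix.height = 1080;
  fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_UYVY;   // depends on the sensor
  fmt.fmt.pix.field = V4L2_FIELD_NONE;
  if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) { perror("VIDIOC_S_FMT"); return 1; }

  v4l2_requestbuffers req{};
  req.count = 1;
  req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
  req.memory = V4L2_MEMORY_MMAP;
  if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0) { perror("VIDIOC_REQBUFS"); return 1; }

  v4l2_buffer buf{};
  buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
  buf.memory = V4L2_MEMORY_MMAP;
  buf.index = 0;
  if (ioctl(fd, VIDIOC_QUERYBUF, &buf) < 0) { perror("VIDIOC_QUERYBUF"); return 1; }
  void* cpu = mmap(nullptr, buf.length, PROT_READ | PROT_WRITE, MAP_SHARED, fd, buf.m.offset);

  ioctl(fd, VIDIOC_QBUF, &buf);
  v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
  ioctl(fd, VIDIOC_STREAMON, &type);
  ioctl(fd, VIDIOC_DQBUF, &buf);                 // blocks until a frame is filled

  // The frame sits in CPU-visible memory here; getting it onto the GPU is an
  // explicit copy (or a mapped/pinned buffer), not a direct hardware path.
  void* gpu = nullptr;
  cudaMalloc(&gpu, buf.bytesused);
  cudaMemcpy(gpu, cpu, buf.bytesused, cudaMemcpyHostToDevice);

  // ... run CUDA format conversion / preprocessing on `gpu` here ...

  cudaFree(gpu);
  ioctl(fd, VIDIOC_STREAMOFF, &type);
  munmap(cpu, buf.length);
  close(fd);
  return 0;
}
```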

Does the jetson-utils library make use of the feature referenced there? It exposes the captured image in CUDA memory.
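
For context, the jetson-utils usage I have in mind is roughly this (a minimal sketch based on its videoSource API; the device URI, resolution, and timeout are illustrative):

```cpp
// Minimal jetson-utils capture sketch: Capture() hands back a pointer into
// CUDA-mapped memory, so the frame can go straight to CUDA kernels.
#include <cuda_runtime.h>
#include <jetson-utils/videoSource.h>

int main() {
  videoOptions opt;
  opt.width = 1920;                       // illustrative resolution
  opt.height = 1080;
  videoSource* input = videoSource::Create("v4l2:///dev/video0", opt);
  if (!input) { return 1; }

  uchar3* image = nullptr;                // device-accessible (mapped) memory
  while (input->Capture(&image, 1000)) {  // 1000 ms timeout
    // hand `image` to CUDA kernels / preprocessing here, without first
    // copying the frame back through host memory
  }

  delete input;
  return 0;
}
```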