General NITROS questions

I’m relatively new to ROS2 and have been coming up to speed on NITROS. I’ve been able to get GitHub - NVIDIA-ISAAC-ROS/isaac_ros_image_pipeline: Hardware-accelerated ROS2 packages for camera image processing running with a RealSense D435i camera on my Orin dev kit. I’ve also been able to get GitHub - NVIDIA-ISAAC-ROS/isaac_ros_image_segmentation: Hardware-accelerated, deep learned semantic image segmentation running with that same RealSense camera. Working through those efforts left me with a few questions that I’m hoping people here can answer:

  1. How customizable are GEMs? If an end user wanted to craft their own unique hardware-acceleration operation, is there any documentation available that describes how to write custom GEMs, or is GEM development being done solely by NVIDIA?

  2. On a related note, I see YAML files for utilizing ‘gxf’ extensions. I’m thinking that ‘gxf’ is the same thing as ‘GEM’. Is this the case? If not, is there any documentation on these ‘gxf’ extensions and how they are used?

  3. The Isaac NITROS animated graphic here (GitHub - NVIDIA-ISAAC-ROS/isaac_ros_nitros: NVIDIA Isaac Transport for ROS package for hardware-acceleration friendly movement of messages) shows “tickets” (handles?) being passed from one node to the next, with the data presumably staying on the GPU without having to be serialized / deserialized by ROS2. I know that ROS2 “type adaptation” is being used to transform ROS2 messages into NITROS messages (e.g. sensor_msgs::msg::Image → nvidia::isaac_ros::nitros::NitrosImage); my rough mental model of that mechanism is sketched after this list. How are these “tickets” being passed from one NITROS node to the next? This question really relates to understanding whether GEMs are customizable by an end user, and to what extent one needs to understand the underlying NITROS implementation.

  4. I understand that operating inside the same process provides zero-copy capabilities with NITROS. I assume this means that if I have two NITROS nodes operating in different processes, data that gets passed between them will suffer a serialize/deserialize penalty as things go from NITROS → ROS2 → NITROS in the transfer. Is this correct?

  5. If you are dealing with custom ROS2 messages that are not among the “NITROS Data Types” listed here (GitHub - NVIDIA-ISAAC-ROS/isaac_ros_nitros: NVIDIA Isaac Transport for ROS package for hardware-acceleration friendly movement of messages), I assume that you will not be able to take advantage of the hardware acceleration & zero-copy that NITROS provides. Is this correct?
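
For reference, here is the rough sketch of ROS2 type adaptation I mentioned in question 3. It comes from my reading of the rclcpp TypeAdapter documentation, not from the NITROS source; the CudaImage struct below is just a placeholder I made up to stand in for something like NitrosImage, and the CUDA calls are only there to show where host/device copies would happen.

```cpp
#include <cstddef>
#include <cstdint>
#include <string>

#include <cuda_runtime.h>
#include <rclcpp/type_adapter.hpp>
#include <sensor_msgs/msg/image.hpp>
#include <std_msgs/msg/header.hpp>

// Placeholder for a GPU-resident image. This is NOT the real NitrosImage,
// just something concrete to hang the adaptation mechanism on.
struct CudaImage
{
  std_msgs::msg::Header header;
  uint32_t height{0};
  uint32_t width{0};
  std::string encoding;
  uint32_t step{0};
  void * device_ptr{nullptr};  // pixel data stays in GPU memory
};

// Specializing rclcpp::TypeAdapter tells rclcpp how to convert between the
// custom type and the ROS message. The conversions only run when a plain ROS
// message is actually needed (e.g. for an out-of-process subscriber);
// in-process subscribers to the adapted type get the CudaImage by pointer.
template<>
struct rclcpp::TypeAdapter<CudaImage, sensor_msgs::msg::Image>
{
  using is_specialized = std::true_type;
  using custom_type = CudaImage;
  using ros_message_type = sensor_msgs::msg::Image;

  static void convert_to_ros_message(const custom_type & source, ros_message_type & destination)
  {
    destination.header = source.header;
    destination.height = source.height;
    destination.width = source.width;
    destination.encoding = source.encoding;
    destination.step = source.step;
    destination.data.resize(static_cast<size_t>(source.step) * source.height);
    // Device-to-host copy happens only here, i.e. only when serialization is required.
    cudaMemcpy(destination.data.data(), source.device_ptr,
               destination.data.size(), cudaMemcpyDeviceToHost);
  }

  static void convert_to_custom(const ros_message_type & source, custom_type & destination)
  {
    destination.header = source.header;
    destination.height = source.height;
    destination.width = source.width;
    destination.encoding = source.encoding;
    destination.step = source.step;
    cudaMalloc(&destination.device_ptr, source.data.size());
    cudaMemcpy(destination.device_ptr, source.data.data(),
               source.data.size(), cudaMemcpyHostToDevice);
  }
};
```

As I understand it, a publisher can then be created with create_publisher<rclcpp::TypeAdapter<CudaImage, sensor_msgs::msg::Image>>("image", 10) and publish CudaImage directly, and NITROS presumably does something conceptually similar with NitrosImage plus its GXF machinery underneath.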

Thanks.

  1. GEMs are hardware-accelerated robotics packages. The GEMs we provide in Isaac ROS are for the ROS2 ecosystem. You can of course write your own ROS2 packages that use CUDA, VPI, or another hardware acceleration API. The Isaac ROS GEMs are configurable through the ROS parameters we expose, as well as through the source where it is available. Making GEMs work with NITROS is being done solely by NVIDIA at the moment, while we scope out how to open the interfaces to others.

  2. “GXF” is an internal framework that we use as part of NITROS. No, it is not the same thing as a GEM. GXF is part of the scoping work to make NITROS accessible to others.

  3. The “tickets” are passed by the ROS2 rclcpp layer as pointers from one node to another. At present, you would have to know quite a lot about the underlying implementation to interact with NitrosImage and friends directly.

  4. Yes, that is correct. Once you need to send a message out-of-process, ROS2 has to marshal/serialize the message (convert it from the custom type back to a ROS message) and send it through the RMW layer to the DDS layer, which then transports it to the other process. (A minimal single-process composition sketch follows this list.)

  5. That is correct. We need to implement a NITROS data type for each message type whose conversion we want to support in NITROS.
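
To make point 4 concrete, here is a minimal sketch of keeping two nodes in the same process so that intra-process (pointer-based) communication can be used. The ProducerNode/ConsumerNode component classes and the my_pkg headers are placeholders, not actual Isaac ROS packages.

```cpp
#include <memory>

#include <rclcpp/rclcpp.hpp>

// Hypothetical component headers -- substitute your own node classes here.
#include "my_pkg/producer_node.hpp"
#include "my_pkg/consumer_node.hpp"

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);

  // Enabling intra-process communication lets messages be handed between
  // nodes in this process as pointers, with no serialization step.
  rclcpp::NodeOptions options;
  options.use_intra_process_comms(true);

  auto producer = std::make_shared<my_pkg::ProducerNode>(options);
  auto consumer = std::make_shared<my_pkg::ConsumerNode>(options);

  // Both nodes spin in one executor in one process, so the data never has to
  // cross the RMW/DDS boundary on its way from producer to consumer.
  rclcpp::executors::SingleThreadedExecutor executor;
  executor.add_node(producer);
  executor.add_node(consumer);
  executor.spin();

  rclcpp::shutdown();
  return 0;
}
```

The same effect can be achieved from a launch file by loading the components into a single ComposableNodeContainer, which is how the Isaac ROS examples are typically launched.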

Hi, I have a question regarding GEMs working with NITROS.

How can I run the NITROS Bridge on a GEM to make it runnable on a ROS 1 Noetic stack on a Jetson AGX Orin? Since the NITROS Bridge is not supported on Jetson, is there an alternative way?

How easy is it to make a GEM non-NITROS?

Thanks

Hi, I was faced with the same problem when I wanted to configure an image recognition system on my drone. I found the solution in this article (French).

Basically, you need to install Docker on the Jetson and configure it to recognize GPU capabilities using the NVIDIA Container Toolkit. Then create a Docker container based on a Noetic ROS image, integrating all the necessary dependencies for your GEM.

Examine the GEM for NITROS-specific dependencies and look for ROS-compatible alternatives or other libraries offering similar functionality, such as CUDA or OpenCV for GPU acceleration. Modify the GEM code to replace the NITROS calls with these alternatives, and test the modified GEM in your Docker container to ensure compatibility and performance with ROS Noetic on Jetson AGX Orin.
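
To give a rough idea of that last step, here is a minimal ROS Noetic (roscpp) sketch of a non-NITROS node that keeps the heavy work on the GPU through OpenCV's CUDA modules. It assumes OpenCV was built with CUDA support (the cudafilters module); the node name, topics, and the Gaussian blur are only illustrative, not taken from any particular GEM.

```cpp
#include <ros/ros.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/Image.h>
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudafilters.hpp>

// Plain roscpp node: GPU acceleration via OpenCV's CUDA modules instead of NITROS.
class GpuBlurNode
{
public:
  explicit GpuBlurNode(ros::NodeHandle & nh)
  {
    pub_ = nh.advertise<sensor_msgs::Image>("image_blurred", 1);
    sub_ = nh.subscribe("image_raw", 1, &GpuBlurNode::imageCallback, this);
    // Gaussian filter that runs on the GPU (requires OpenCV built with CUDA).
    filter_ = cv::cuda::createGaussianFilter(CV_8UC3, CV_8UC3, cv::Size(7, 7), 2.0);
  }

private:
  void imageCallback(const sensor_msgs::ImageConstPtr & msg)
  {
    cv_bridge::CvImageConstPtr cv_in = cv_bridge::toCvShare(msg, "bgr8");

    // Upload to the GPU, filter, download. Unlike NITROS, each hop here copies
    // host <-> device, but the heavy computation still runs on the GPU.
    cv::cuda::GpuMat gpu_in, gpu_out;
    gpu_in.upload(cv_in->image);
    filter_->apply(gpu_in, gpu_out);

    cv::Mat result;
    gpu_out.download(result);

    pub_.publish(cv_bridge::CvImage(msg->header, "bgr8", result).toImageMsg());
  }

  ros::Publisher pub_;
  ros::Subscriber sub_;
  cv::Ptr<cv::cuda::Filter> filter_;
};

int main(int argc, char ** argv)
{
  ros::init(argc, argv, "gpu_blur_node");
  ros::NodeHandle nh;
  GpuBlurNode node(nh);
  ros::spin();
  return 0;
}
```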


Hi @amu459, as @guillaumeserrand35 replied,

We made the Isaac ROS NITROS Bridge package (Isaac ROS NITROS Bridge — isaac_ros_docs documentation) to transition from ROS Noetic to ROS 2.

The documentation for building your setup is exhaustive.

If you run into any issue, let me know.

Best,
Raffaello
