How do I set up a DeepStream pipeline on my NVIDIA Orin Nano?

• Hardware Platform (Jetson / GPU): NVIDIA Orin Nano

• JetPack Version (valid for Jetson only): JetPack 6

Hi everyone,
I am working on a personal project to build a drone, using the NVIDIA Orin Nano as its main processing unit. I have 2 cameras (one facing the ground and one facing horizontally) and a lidar sensor unit. I want to use the DeepStream SDK to create a pipeline that streams video from the cameras to a WebRTC server, with object detection running on that video. Can someone please tell me how to get started from scratch?

  1. DeepStream is an SDK based on GStreamer, the open-source multimedia framework. It provides NVIDIA hardware-accelerated DeepStream plugins, video inferencing and analytics APIs, and many sample applications.
  2. There are also Jetson-accelerated GStreamer plugins (see the DeepStream SDK FAQ in the Intelligent Video Analytics / DeepStream SDK section of the NVIDIA Developer Forums) that are specific to Jetson devices.
  3. Based on the DeepStream plugins and APIs plus the open-source GStreamer plugins and APIs, you can implement your own inferencing application for video and lidar data; see the sketch after this list.
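
As a rough starting point, here is a minimal Python sketch of a single-camera DeepStream detection pipeline built with Gst.parse_launch. It assumes the DeepStream SDK and the PyGObject GStreamer bindings are installed on the Orin Nano, that a CSI camera is reachable through nvarguscamerasrc with sensor-id=0, and that the sample nvinfer config path below matches your DeepStream installation; adjust these for your own setup.

```python
#!/usr/bin/env python3
# Minimal sketch: CSI camera -> nvstreammux -> nvinfer (object detection)
# -> nvdsosd (draw boxes) -> on-screen display on the Jetson.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# The nvinfer config path below is the sample primary-detector config shipped
# with DeepStream; replace it with your own model config as needed.
PIPELINE = (
    "nvarguscamerasrc sensor-id=0 ! "
    "video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1 ! "
    "mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/"
    "samples/configs/deepstream-app/config_infer_primary.txt ! "
    "nvvideoconvert ! nvdsosd ! nv3dsink sync=false"
)

pipeline = Gst.parse_launch(PIPELINE)
pipeline.set_state(Gst.State.PLAYING)

loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message::error", lambda _bus, _msg: loop.quit())
bus.connect("message::eos", lambda _bus, _msg: loop.quit())

try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)
```

For a second camera you would add another nvarguscamerasrc branch feeding mux.sink_1 and raise batch-size to 2; detection metadata can then be read from the buffers with a pad probe using the DeepStream Python bindings (pyds).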

If your requirement is only camera capture and WebRTC streaming, you may refer to the Jetson accelerated GStreamer plugins (Accelerated GStreamer — NVIDIA Jetson Linux Developer Guide documentation) and the open-source GStreamer webrtcsink plugin (webrtcsink on gstreamer.freedesktop.org); a capture-to-WebRTC sketch follows below.
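
If you only need capture-to-WebRTC, a sketch along these lines may help. It assumes the Jetson accelerated plugins and the gst-plugins-rs webrtcsink element are installed, and that webrtcsink's default signaller can reach a gst-plugins-rs signalling server at ws://127.0.0.1:8443.

```python
#!/usr/bin/env python3
# Minimal sketch: CSI camera -> nvvidconv (NVMM to system memory) -> webrtcsink.
# webrtcsink handles encoding and WebRTC negotiation per connected viewer.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

PIPELINE = (
    "nvarguscamerasrc sensor-id=0 ! "
    "video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1 ! "
    # nvvidconv moves frames out of NVMM memory so webrtcsink can consume them
    "nvvidconv ! video/x-raw,format=I420 ! queue ! webrtcsink"
)

pipeline = Gst.parse_launch(PIPELINE)
pipeline.set_state(Gst.State.PLAYING)

loop = GLib.MainLoop()
try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)
```

The webrtcsink documentation on gstreamer.freedesktop.org describes how to run its companion signalling server and an example web client for viewing the stream.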

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
