DeepStream in a custom application

Hi all,
1- I want to know how efficient the DeepStream SDK can be for a custom application. I know we can train models with TLT on a custom dataset and then deploy them on DeepStream, and that gives me great results, but showing results on screen isn't enough for a business. Maybe I want to crop an ROI and pass it into another model. How flexible is it?

2- In my opinion, DeepStream may not be efficient for custom business needs. Is it possible to add this SDK to your own project? For example, if we want the system to raise an alarm when it sees an unknown object, is that possible? It seems to me that the DeepStream SDK exists only to show off the capabilities of the device and is not extendable to a custom project, right?

3- Suppose I trained a detection model (face detection) with TLT and deployed it on DeepStream. If I want the system to save a capture somewhere when it sees certain people, is that possible in DeepStream?

4- In the DeepStream Python apps, I only see an SSD parser as the detector. Is that the only supported model? If I want to deploy a DetectNet_v2 detector, is that possible with the Python samples? If so, will it work with the SSD parser sample?

5- Is it possible to use plugins such as the tracker, decoder, etc. in custom Python applications?

6- Some plugins use the Jetson Nano's hardware, such as the decoder/encoder/scaler. I want to know whether other plugins, such as the tracker, run on the CPU or have dedicated hardware for that purpose.


We have experience working with clients that use DeepStream for real-world applications, and I can tell you that, thanks to the SDK being based on GStreamer, it is extremely flexible. We have been able to incorporate ROIs with videoroi, save captures and videos, and stream over RTSP and WebRTC.

It is possible to incorporate DeepStream into your custom application. Handling events and alerts is also possible; we have even done it from a separate application running on the same device. The DeepStream metadata can be serialized into JSON and sent with a signal or read from a property.
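As a rough illustration of that serialization step, here is a minimal sketch. The field names are my own illustration of the kind of values you would read out of DeepStream's object metadata, not the actual NvDsObjectMeta API:

```python
import json

def detection_to_json(label, confidence, bbox):
    """Serialize one detection into a JSON event string.
    bbox is (left, top, width, height); field names are illustrative."""
    left, top, width, height = bbox
    return json.dumps({
        "label": label,
        "confidence": confidence,
        "bbox": {"left": left, "top": top, "width": width, "height": height},
    })

# A separate alerting application can parse this back with json.loads().
msg = detection_to_json("person", 0.92, (100, 50, 80, 160))
```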

Yes, this is possible. DeepStream 5.0 has a smart record example that records only when specific rules or conditions are met. Even in DS 4.0 we were able to do exactly the same with out-of-the-box GStreamer elements and metadata parsing.
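I don't have the smart record code at hand, but the rule/condition gating can be sketched like this. The threshold and consecutive-frame rule are my own illustration, not DeepStream's API:

```python
def should_record(object_counts, min_objects=1, min_consecutive=3):
    """Return True once at least `min_objects` detections appear in
    `min_consecutive` consecutive frames -- a simple stand-in for the
    kind of rule a smart-record trigger might check."""
    streak = 0
    for count in object_counts:
        streak = streak + 1 if count >= min_objects else 0
        if streak >= min_consecutive:
            return True
    return False

# Per-frame detection counts: triggers after 3 frames in a row with a detection.
triggered = should_record([0, 1, 1, 1])
```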

I haven’t worked with the Python API yet, so I don’t know for sure.

Yes, you can use virtually all GStreamer elements with DeepStream as long as you perform the right conversions.
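As a sketch of what such a mixed pipeline can look like (the element names are real DeepStream/GStreamer elements on Jetson, but the file name and inference config path are placeholders, and I have not run this exact line):

```shell
gst-launch-1.0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=config_infer_primary.txt ! \
  nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink \
  filesrc location=sample.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0
```

nvvideoconvert is what handles the NVMM/system-memory conversions when you want to slot a standard GStreamer element in between DeepStream elements.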

As far as I know, there is no specialized hardware for tracking, but most DeepStream elements mainly use the GPU. DeepStream's tracker element takes a binary as part of its configuration, and DS 4.0 provides three options for this binary:

  1. IOU — low load (CPU)
  2. KLT — high load (CPU)
  3. NvDCF — medium load (GPU), low load (CPU)
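For reference, the binary is selected in the [tracker] section of a deepstream-app configuration. A sketch, assuming a default DS 4.0 install location (paths and the width/height values may differ on your system):

```ini
[tracker]
enable=1
tracker-width=640
tracker-height=368
# Pick ONE of the three low-level tracker libraries:
ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_iou.so
#ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
#ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_nvdcf.so
```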

Does the videoroi plugin exist in DeepStream, or was it created by your company?
Did you modify deepstream-app to do that?

Sorry, for a moment I forgot that videoroi is the name we use for our ROI pipeline, not the actual name of the element. The elements used are videocrop, which is in GStreamer's plugins-good, and nvvidconv, an NVIDIA conversion element that supports cropping. Here are some example pipelines:

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=3264, height=2464' ! nvvidconv ! videocrop left=0 right=2624 top=0 bottom=1984 ! nvvidconv ! 'video/x-raw(memory:NVMM), width=640, height=480' ! perf print-arm-load=true ! nvoverlaysink display-id=1 sync=false
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=3264, height=2464' ! nvvidconv left=0 right=640 top=0 bottom=480 ! 'video/x-raw(memory:NVMM), width=640, height=480' ! perf print-arm-load=true ! nvoverlaysink display-id=1 sync=false

nvvidconv uses the GPU, but the ROI needs to be at least 1/16 of the input resolution or it will fail. Cropping with videocrop consumes around 8% of a single CPU core on the Jetson TX2.
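One plausible reading of that 1/16 limit is per dimension, which is consistent with the 3264x2464 → 640x480 crop in the pipelines above. A small sanity-check sketch under that assumption (the exact rule isn't documented here, so treat this as an interpretation to verify):

```python
def roi_is_valid(roi_w, roi_h, in_w, in_h):
    """Assumed interpretation of the nvvidconv limit: each ROI dimension
    must be at least 1/16 of the matching input dimension."""
    return roi_w * 16 >= in_w and roi_h * 16 >= in_h

# The example pipelines crop 3264x2464 down to 640x480:
ok = roi_is_valid(640, 480, 3264, 2464)
```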

There are several ways to work with GStreamer; the DeepStream examples use GStreamer as a library. The other approach is gst-launch, a command-line tool for creating pipelines quickly when prototyping. In our experience, using an approach similar to gst-launch (GStreamer Daemon) speeds up development and makes the code more maintainable, since it is easier to isolate pipelines and debug them with gst-launch. You can check our example using GstD and GstInterpipes with DeepStream here


1- I want to decode multiple RTSP streams using the Jetson Nano's hardware decoder. I used GStreamer + OpenCV in Python code, which does use NVDEC and works, but the decoded frames are first pushed to an NVMM buffer and then copied into a CPU buffer, because OpenCV only supports CPU buffers. I want to use GStreamer Python code without OpenCV and get the frames of the streams as NumPy arrays.

2- The input frame rate is 30 and I want the output frame rate to be 10, which causes the GStreamer queue to grow gradually. How can I solve this problem? I used omxh264dec for decoding, and to reduce the rate I used videorate in GStreamer, but that consumes CPU.
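The 30 → 10 fps reduction described above amounts to keeping every third frame, which is roughly what videorate does when capped to 10/1 downstream. Sketched in plain Python to show why the queue need not grow when frames are dropped rather than buffered:

```python
def keep_frame(frame_index, in_fps=30, out_fps=10):
    """Keep every (in_fps // out_fps)-th frame and drop the rest.
    Assumes in_fps is an integer multiple of out_fps."""
    return frame_index % (in_fps // out_fps) == 0

kept = [i for i in range(9) if keep_frame(i)]  # keeps frames 0, 3, 6
```

On the pipeline side, setting videorate's drop-only property to true should avoid the cost of duplicating frames, though I have not profiled its CPU usage on the Nano.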

Do you have any idea about the problems?

Is it possible to give me a link for decoding RTSP with the Jetson Nano, like this, so I can then use the code in my custom application for processing?

Hi, I’m not sure if this is what you are referring to, but we use uridecodebin to decode RTSP. The element uses hardware decoding underneath.
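To get decoded frames into NumPy without OpenCV, the usual route is an appsink at the end of the pipeline (after converting back to system memory, e.g. with nvvidconv ! video/x-raw), pulling samples and reinterpreting the mapped bytes. A minimal sketch of that last step, with synthetic bytes standing in for a mapped Gst.Buffer (the appsink wiring itself is omitted, and tightly packed RGB with no row padding is assumed):

```python
import numpy as np

def buffer_to_frame(data: bytes, width: int, height: int, channels: int = 3):
    """Reinterpret raw bytes pulled from an appsink sample as an
    H x W x C frame. Real buffers may carry row padding (stride), in
    which case you must slice each row down to width * channels bytes."""
    return np.frombuffer(data, dtype=np.uint8).reshape((height, width, channels))

raw = bytes(range(2 * 2 * 3))            # stand-in for mapped buffer data
frame = buffer_to_frame(raw, width=2, height=2)
```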

Doesn't uridecodebin have source code?
How can I capture the decoded frames of multiple streams as NumPy arrays?