Regarding custom probe implementation in a DeepStream application

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU (Tesla)
• DeepStream Version 6.1.1
• JetPack Version (valid for Jetson only) NA
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)

We are building a custom DeepStream application with YOLO, a tracker, and a SlowFast model. However, we want to build a custom tracker and a custom SlowFast model instead of using the models that ship with the DeepStream samples.

Could you please help answer the following questions with regard to the problem statement above:

  1. Can the output of a probe be used as input for a GStreamer element?
  2. Does implementing a custom probe result in any reduction in latency, or does it make use of the GPU capabilities that DeepStream provides?
  3. Can we have multiple probes and use the output of the first probe as input to the next probe?

We are developing in a Python environment.

Could you please share whether there are any alternative ways to implement custom models? Any resources would be appreciated.

Thanks in advance :)

What do you mean by the “output of the probe”? Pad probe functions are just callbacks invoked on specific pad states. [GstPad]

No. Some probe types will block the pad. [GstPad] Even with the non-blocking probe types, the processing in the callback will not be accelerated.

Yes, but it is not necessary. The probe type is a mask. [GstPad]
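In case it helps, here is a minimal gst-python sketch of what the answers above mean in practice: a pad probe is a callback attached to a pad with a type mask; it can only inspect or edit the data passing through, and it produces no separate output. The `pipeline` object and the element/pad names below are placeholders for your own setup.

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def my_probe(pad, info, u_data):
    # The probe only sees the buffer (or event) passing through the pad;
    # it can inspect or modify it in place, but it has no separate "output"
    # that could be fed to another element.
    gst_buffer = info.get_buffer()
    if gst_buffer is None:
        return Gst.PadProbeReturn.OK
    # ... inspect / modify gst_buffer or its attached metadata here ...
    return Gst.PadProbeReturn.OK  # let the data continue downstream

# "pipeline" and the element name "pgie" are placeholders for your pipeline.
pgie = pipeline.get_by_name("pgie")
srcpad = pgie.get_static_pad("src")

# The probe type is a bit mask, so several conditions can be combined.
mask = Gst.PadProbeType.BUFFER | Gst.PadProbeType.EVENT_DOWNSTREAM
srcpad.add_probe(mask, my_probe, 0)

# A second probe can be added on the same pad or on a downstream pad; it will
# see whatever the first probe left in the buffer, since probes edit in place.
```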

Please make sure you have the necessary GStreamer knowledge and coding skills before you start with DeepStream. If you want to use Python, please make sure you are familiar with gst-python too: Python GStreamer Tutorial (brettviren.github.io). This is the DeepStream forum, so we will focus on DeepStream here.

There are already samples of deploying customized models with pyds: deepstream_python_apps/apps/deepstream-ssd-parser at master · NVIDIA-AI-IOT/deepstream_python_apps (github.com). DeepStream never provides any models; the models in the samples are just examples of how to deploy a model with the DeepStream APIs.
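Roughly, the pattern in that sample is: configure nvinfer to attach its raw output tensors to the metadata (output-tensor-meta=1) and disable its built-in post-processing, then attach a probe on the nvinfer src pad that reads NvDsInferTensorMeta and runs your own parser. A simplified sketch (the probe name and the parsing itself are up to your model):

```python
import pyds
from gi.repository import Gst

def pgie_src_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                for i in range(tensor_meta.num_output_layers):
                    layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                    # layer.buffer holds one raw output tensor of your model.
                    # Decode it with your own post-processing here.
            try:
                l_user = l_user.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```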

All the C/C++ methods of deploying customized models with DeepStream also apply to pyds: Using a Custom Model with DeepStream — DeepStream 6.3 Release documentation

deepstream_python_apps/bindings at master · NVIDIA-AI-IOT/deepstream_python_apps (github.com)
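To make a custom model's results usable by downstream elements (for example a tracker), the ssd-parser sample then adds NvDsObjectMeta back into the batch metadata. A rough sketch, assuming you already have `batch_meta` and `frame_meta` as in the previous snippet and that `det` is one detection dictionary produced by your own parser (pixel coordinates, class id, confidence):

```python
import pyds

UNTRACKED_OBJECT_ID = 0xffffffffffffffff  # object not yet assigned an id by a tracker

def add_detection_to_frame(batch_meta, frame_meta, det):
    obj_meta = pyds.nvds_acquire_obj_meta_from_pool(batch_meta)
    rect = obj_meta.rect_params
    rect.left = det["left"]
    rect.top = det["top"]
    rect.width = det["width"]
    rect.height = det["height"]
    obj_meta.confidence = det["confidence"]
    obj_meta.class_id = det["class_id"]
    obj_meta.object_id = UNTRACKED_OBJECT_ID
    # Attach the object to the frame; downstream elements such as nvtracker
    # and nvdsosd read NvDsObjectMeta from the same batch metadata.
    pyds.nvds_add_obj_meta_to_frame(frame_meta, obj_meta, None)
```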

What do you mean by the “output of the probe”? Pad probe functions are just callbacks invoked on specific pad states. [GstPad]

By “the output of the probe” I mean: in the probe function, we are doing some processing on the frames.

Now, we would like to access those processed frames in the GStreamer pipeline. For example, we would like to stream the processed frames to the RTSP sink that we created as a sink element in the pipeline. Is this feasible?

Thank you

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

So you want to change the content of the GstBuffer. Yes, you can, but there are some limitations. You cannot do any time-consuming processing in the probe function, because it will hold the GstBuffer and block the whole pipeline. And you cannot do any processing that changes the caps: Caps (gstreamer.freedesktop.org). This is basic GStreamer knowledge; please refer to the GStreamer sources.

The video data is in the CUDA-based hardware buffer NvBufSurface, which is attached to the GstBuffer. There are already lots of samples showing how to get the NvBufSurface from the GstBuffer. Please refer to the samples in /opt/nvidia/deepstream/deepstream/sources
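As a rough sketch of that pattern in Python (based on the deepstream-imagedata samples; the element setup, coordinates, and pyds version requirements are assumptions to verify against your install): map the NvBufSurface attached to the GstBuffer as a NumPy array and edit the pixels in place, so that downstream elements such as your RTSP sink stream the modified frames. The stream must be converted to RGBA before this pad, and on dGPU the buffers must use CUDA unified memory, as those samples do.

```python
import cv2
import pyds
from gi.repository import Gst

def edit_frames_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Returns a NumPy view of the RGBA frame held in the NvBufSurface.
        frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        # Keep this lightweight: heavy CPU work here stalls the whole pipeline.
        cv2.rectangle(frame, (50, 50), (200, 200), (0, 255, 0, 255), 2)
        # On dGPU the surface should be unmapped after CPU access
        # (available in recent pyds releases).
        pyds.unmap_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```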

Please study GStreamer on your own. This is the DeepStream forum; we will focus on DeepStream here.
