Write a custom GStreamer element for DeepStream

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 5.0 GA
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version : 7.1.x

I want to write a custom GStreamer element for DeepStream. Suppose I want to implement a simple operation like the difference of two frames (motion detection), and this element passes the frame to the next element only when motion is triggered. Is it possible to do this type of work in a GStreamer pipeline? I want a pipeline like this:

source > motion > streammux > inference > osd > render(show)

I have a question about this pipeline: if the motion element does not pass the buffer to the next element when there is no motion, how can we still pass the frame to the render element? I mean, jump from motion to render.
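For reference, the frame-difference check itself is simple. A minimal pure-Python sketch (a real element would operate on mapped GstBuffer data, typically via numpy; the function name and thresholds here are illustrative):

```python
def frame_diff_motion(prev_frame, curr_frame, pixel_threshold=25, motion_ratio=0.01):
    """Return True if enough pixels changed between two grayscale frames.

    Frames are flat sequences of pixel intensities (0-255).  Motion is
    declared when at least motion_ratio of the pixels differ by more
    than pixel_threshold.
    """
    changed = sum(1 for p, c in zip(prev_frame, curr_frame)
                  if abs(p - c) > pixel_threshold)
    return changed >= motion_ratio * len(curr_frame)
```

A videofilter-based element would run this per buffer and only push the buffer downstream (or emit a signal) when it returns True.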


As far as I know, pushing the frame directly to a non-adjacent downstream element is not possible in GStreamer. But I thought of a solution that has almost the same result.

  1. Create the motion detection element based on videofilter. Add a signal for when it detects motion and another for when it stops detecting.
  2. Modify nvinfer to add a property that enables or disables a bypass mode. In this mode, nvinfer pushes the buffer through directly without performing inference.
  3. Create a controller application that listens for the signals emitted by the motion detection element and sets the new nvinfer property accordingly.
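Step 3 could be sketched like this. Note the signal names (`motion-detected`, `motion-stopped`) and the `bypass` property are hypothetical — they only exist if you add them in steps 1 and 2; stock DeepStream does not provide them:

```python
class BypassController:
    """Maps hypothetical motion-detect signals to a hypothetical
    'bypass' property on nvinfer.

    With real GStreamer elements the handlers would be connected as:
        motion.connect('motion-detected', ctrl.on_motion_detected)
        motion.connect('motion-stopped', ctrl.on_motion_stopped)
    """

    def __init__(self, nvinfer):
        self.nvinfer = nvinfer

    def on_motion_detected(self, element=None):
        # Motion present: let nvinfer run inference normally.
        self.nvinfer.set_property('bypass', False)

    def on_motion_stopped(self, element=None):
        # No motion: tell nvinfer to push buffers through untouched.
        self.nvinfer.set_property('bypass', True)
```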

Thanks, @miguel.taylor,

pushing the frame directly to a downstream element is not possible in GStreamer.

Is it not possible with a tee element?
I’m new to this kind of work. If possible, could you explain more about these, or link sample code for these tasks: adding signals, creating an element with videofilter, etc.?

Is it possible to modify nvinfer in DeepStream?
If possible, please link sample code for the second suggestion.

Sorry if I wasn’t clear: 1, 2, and 3 are not separate solutions, but steps of the same solution.

I think there is a way to do it with tee elements, but the problem with that solution is that either you are still processing the buffers with nvinfer, consuming resources, or you need another element at the end that selects buffers from one branch or the other. You also need to consider synchronization issues. Overall I think that solution is more complicated.

Yes, nvinfer is open source; we have modified it in the past to add support for TinyYOLO and GPG model encryption. nvtracker and nvdsosd are not open source, but they will not operate on a buffer without detection metadata.

Some references:

In my experience, the learning curve for creating custom GStreamer plugins is a bit steep. We provide GStreamer consulting and development as part of our engineering services if you need help developing your application.
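To sketch the signal part of step 1: GStreamer signals are ordinary GObject signals, i.e. observer callbacks registered on the element (declared with `g_signal_new` in C or `__gsignals__` in PyGObject). A stdlib-only Python sketch of the idea, with toy frame handling — the element and signal names are illustrative, not an existing plugin:

```python
class SignalEmitter:
    """Minimal observer-pattern stand-in for GObject signal dispatch."""

    def __init__(self):
        self._handlers = {}

    def connect(self, signal, callback):
        self._handlers.setdefault(signal, []).append(callback)

    def emit(self, signal, *args):
        for cb in self._handlers.get(signal, []):
            cb(self, *args)


class MotionDetect(SignalEmitter):
    """Toy stand-in for a videofilter-based element: compares each
    frame with the previous one and emits 'motion-detected' or
    'motion-stopped' on state changes."""

    def __init__(self, threshold=25):
        super().__init__()
        self.threshold = threshold
        self.prev = None
        self.in_motion = False

    def push_frame(self, frame):
        # Frame is a flat sequence of pixel intensities (0-255).
        if self.prev is not None:
            moving = any(abs(p - c) > self.threshold
                         for p, c in zip(self.prev, frame))
            if moving and not self.in_motion:
                self.emit('motion-detected')
            elif not moving and self.in_motion:
                self.emit('motion-stopped')
            self.in_motion = moving
        self.prev = frame
```

In a real plugin, `push_frame` corresponds to the videofilter's transform function, and the controller application connects to the signals just like to any other GObject signal.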


Q1 - Is the NVMM buffer only for the Jetson platform?
Q2 - I want to write a custom plugin where the upstream element has src caps video/x-raw(memory:NVMM), and I want to convert the buffer to a numpy array in order to draw results on the frames. My question: is converting the buffer to a numpy array an optimal approach? Does this push data from the NVMM buffer to a CPU buffer, and does that use memory twice? If so, given that a GStreamer buffer is just a pointer and Jetson has shared memory, why should it be copied twice?
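A note on view vs. copy semantics, which is the crux of Q2. With ordinary system (CPU) memory, wrapping a mapped buffer in numpy (e.g. `np.frombuffer` over the data from `Gst.Buffer.map`) creates a zero-copy view; `np.array` would create a second allocation. An NVMM buffer, however, is a separate hardware allocation, so getting its pixels into a plain numpy array generally involves a conversion step first (e.g. nvvideoconvert to video/x-raw), which is where the extra copy comes from. A stdlib-only illustration of the view/copy distinction:

```python
# Stands in for a mapped CPU buffer (e.g. the bytes from Gst.Buffer.map).
data = bytearray([10, 20, 30, 40])

view = memoryview(data)  # zero-copy: same underlying bytes, no second allocation
copy = bytes(data)       # independent copy: second allocation

data[0] = 99
assert view[0] == 99     # the view reflects the change
assert copy[0] == 10     # the copy does not
```

So the answer depends on which memory the buffer lives in: a view is only possible over memory the CPU can already address.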