Python Decoder + Encoder only with GStreamer and DeepStream

I want to use NVIDIA HW to receive multiple FHD RTSP streams and to encode the processed streams using the same HW.

I am working in a Python environment on Ubuntu.

My two simple questions are:

  1. How do I build a GStreamer pipeline with access to the image data?
  2. How do I set up an encoder pipeline to stream frames out?

Decoder Side
I saw examples, but they do not enable access to the image data in the decoder part.

I tried to modify existing decoder GStreamer pipelines from

'rtspsrc name=m_rtspsrc ! rtph264depay name=m_rtph264depay ! avdec_h264 name=m_avdech264 ! videoconvert name=m_videoconvert ! videorate name=m_videorate ! appsink name=m_appsink'

and

'rtspsrc name=m_rtspsrc ! rtph264depay name=m_rtph264depay ! h264parse ! nvv4l2decoder ! nvvideoconvert ! video/x-raw,width=1920,height=1080,format=RGBA ! appsink name=m_appsink'

but they fail to emit the "new-sample" signal when I use, for example,
self.sink.connect("new-sample", self.new_buffer, self.sink)
where self.new_buffer should collect a frame.
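One likely cause, sketched below under assumptions (no live RTSP source was tested): appsink only emits "new-sample" when its emit-signals property is true, and both pipelines above leave it at the default (false). The launch string and helper here are illustrative; max-buffers=1 drop=true keeps a slow Python callback from stalling the decoder.

```python
import numpy as np

# Hedged sketch of the NVIDIA-decoder pipeline with signal emission enabled.
DECODE_LAUNCH = (
    "rtspsrc name=m_rtspsrc ! rtph264depay ! h264parse ! nvv4l2decoder "
    "! nvvideoconvert ! video/x-raw,width=1920,height=1080,format=RGBA "
    "! appsink name=m_appsink emit-signals=true max-buffers=1 drop=true"
)

def rgba_bytes_to_array(data, width=1920, height=1080):
    """Reinterpret the bytes of a mapped RGBA GstBuffer as an HxWx4 array."""
    return np.frombuffer(data, dtype=np.uint8).reshape(height, width, 4)
```

Inside the "new-sample" callback you would then pull the frame with `sample = sink.emit("pull-sample")`, map it with `ok, info = sample.get_buffer().map(Gst.MapFlags.READ)`, pass `info.data` to `rgba_bytes_to_array`, unmap, and return `Gst.FlowReturn.OK`.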

The solution in
is working, but it is not flexible enough for my needs.

Encoder Side

I tried to modify

    self.launch_string = 'appsrc name=source is-live=true block=true format=GST_FORMAT_TIME ' + caps_str + \
                         ' ! videoconvert' \
                         ' ! video/x-raw,format=I420' \
                         ' ! x264enc tune=zerolatency speed-preset={} threads=0 {} '.format(speed_preset, key_int_max) + \
                         ' ! rtph264pay config-interval=1 pt=96 name=pay0'


    self.launch_string = 'appsrc name=source is-live=true block=true format=GST_FORMAT_TIME ' + caps_str + \
                         ' ! nvvideoconvert ! video/x-raw,width=1920,height=1080,format=RGBA ' \
                         ' ! nvv4l2h264enc ! h264parse!' \
                         ' ! rtph264pay config-interval=1 pt=96 name=pay0'

but it failed too
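Two things look wrong in that attempt, sketched below under assumptions (caps_str is my guess at the appsrc caps, and I have not run this against real HW): `' ! h264parse!'` followed by `' ! rtph264pay ...'` produces two consecutive `!` separators, which the pipeline parser rejects, and nvv4l2h264enc typically expects NV12 frames in NVMM memory rather than system-memory RGBA.

```python
# Hedged sketch, not a verified pipeline; caps_str describes the RGBA frames
# assumed to be pushed into appsrc, matching the snippets above.
caps_str = "caps=video/x-raw,format=RGBA,width=1920,height=1080,framerate=30/1"

launch_string = (
    "appsrc name=source is-live=true block=true format=GST_FORMAT_TIME " + caps_str +
    # Convert to NVMM NV12, which the NVIDIA V4L2 encoder typically expects.
    " ! nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12"
    # Exactly one '!' between elements: '... h264parse! ! rtph264pay' is a parse error.
    " ! nvv4l2h264enc ! h264parse"
    " ! rtph264pay config-interval=1 pt=96 name=pay0"
)
```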

I would be happy to get concrete help on how to create stable, accessible encoder and decoder access to NVIDIA HW from Python.

Thank you

I know that the best way to access frame data (and any DeepStream metadata) in the GStreamer pipeline is to use the dsexample plugin shipped with the DeepStream SDK; it lets you manipulate frame data while it stays in GPU memory.

From the DeepStream SDK:
"DsExample GStreamer plugin, gst-plugins/gst-dsexample, Template plugin for integrating custom algorithms into DeepStream SDK graph"

See this link:

and this one:


Thank you,
I looked at the attached links.
My question is still how I can connect such pipelines to a Python environment.
To be as efficient as possible, I need to access the image data directly from Python.


You can use dsexample to implement a Python binding and get the frames from/to the GPU.

BTW - what do you mean by "not flexible enough" in:
"the solution in

is working but it is not flexible enough for my needs"


I searched for any example of a dsexample Python binding with no success.
I would be happy to see a simple example of how to do that.

Regarding the filesink demonstration ("not flexible enough"): it is fully functional, but from my experience it is not stable in my scenario (it tends to crash) and slows down the pipeline considerably.

Dsexample is just C code with a Makefile, located in the DeepStream SDK under the gst-plugins folder.

C-Python binding can be done with various tools, e.g. ctypes or SWIG.
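To make the ctypes suggestion concrete, here is the generic binding pattern. The dsexample side is an assumption: you would point CDLL at a shared object you build from gst-plugins/gst-dsexample and declare the signatures of whatever functions you export; libc's strlen stands in below only so the sketch runs without DeepStream installed.

```python
import ctypes
import ctypes.util

# Generic ctypes pattern: load a shared library, declare the C signatures,
# then call the functions from Python. Replace the library and function with
# the ones you export from your dsexample-based .so (hypothetical names).
libc = ctypes.CDLL(ctypes.util.find_library("c"))

libc.strlen.argtypes = [ctypes.c_char_p]   # declare the argument types...
libc.strlen.restype = ctypes.c_size_t      # ...and return type for safe conversion

def c_strlen(s: bytes) -> int:
    """Call the bound C function from Python."""
    return libc.strlen(s)
```

The same `argtypes`/`restype` declarations are how you would pass a frame pointer and dimensions across the boundary, e.g. a `ctypes.POINTER(ctypes.c_uint8)` plus two `ctypes.c_int` arguments for an exported frame-access function.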

I fully understand that it is possible; yet, as I am not a C programmer, this path is appealing to me only if there is a concrete example.

Maybe DeepStream is not yet ready for such a simple task.
Are there HW-accelerated GStreamer plugins and pipelines recommended by NVIDIA for decoding and encoding in a Python environment?


I successfully used VAAPI (Video Acceleration API) in my application, which uses the GPU
but does not use the NVIDIA decoder.

Is there any other video encoding/decoding API that actually uses the HW decoder and encoder and is not DeepStream, or alternatively a way to access image data using the DeepStream plugins in Python?