DeepStream SDK 4.x: reading the output buffer of the nvv4l2decoder plugin

We have created a DeepStream pipeline which reads from an MP4 file source and decodes it using the nvv4l2decoder plugin. We are able to display the video stream using nveglglessink.

Our use case involves sending the decoded frames over IPC to another process, which expects them in Mat format.
Any help on how to extract the video frames and convert them to OpenCV Mat format is appreciated.


I have a couple of questions and suggestions:

  • Where is DeepStream located in your pipeline? Before or after OpenCV?
  • What do you need to do with the stream in OpenCV? If it is something simple like an overlay or a simple algorithm it is far more efficient to do it with CUDA or a GPU accelerated GStreamer element.
  • I suggest using uridecodebin instead of manually selecting the decoder. It gives the pipeline portability, and it will always select the best decoder for your platform.
  • I know 4 ways of getting OpenCV to process a GStreamer buffer:
  1. Developing your own GStreamer plugin (advanced)
  2. Using appsink as a base to add processing at the end of the pipeline (intermediate)
  3. Using Gst-OCV, an element we offer for running OpenCV algorithms inside GStreamer
  4. Using OpenCV's VideoCapture with a GStreamer pipeline string as its argument and the CAP_GSTREAMER backend
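For reference, approach 4 can be sketched roughly as below. This assumes an OpenCV build with GStreamer support; the file path is a placeholder, and the exact caps/elements are an assumption, not a verified DeepStream recipe.

```python
def build_capture_pipeline(uri):
    # uridecodebin picks the best available decoder for the platform;
    # videoconvert + BGR caps hand OpenCV a format it can wrap directly
    # as a numpy array (Python's equivalent of cv::Mat).
    return (
        f"uridecodebin uri={uri} ! "
        "videoconvert ! video/x-raw,format=BGR ! "
        "appsink drop=true max-buffers=1"
    )

if __name__ == "__main__":
    import cv2  # requires OpenCV built with GStreamer support

    # "/path/to/video.mp4" is a placeholder path
    cap = cv2.VideoCapture(
        build_capture_pipeline("file:///path/to/video.mp4"),
        cv2.CAP_GSTREAMER,
    )
    while cap.isOpened():
        ok, frame = cap.read()  # frame is a numpy array (BGR)
        if not ok:
            break
        # ... hand the frame to your application logic here ...
    cap.release()
```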

I hope this helps.

DeepStream is located before the OpenCV code. The only reason we are using OpenCV is to extract the raw frame buffer and send it over the network. We wanted to do all the CPU/GPU-intensive work in the DeepStream pipeline and then delegate the decision of what to do with each frame to the application logic (another team is doing the processing).

Are you suggesting we use uridecodebin instead of nvv4l2decoder?
We eventually got the code working based on appsink. From a performance standpoint, do you suggest we continue with approach 2, or switch to approach 4, which readily accepts the GStreamer pipeline as an argument?
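For anyone landing here later, the core of the appsink approach is mapping the pulled GstBuffer and wrapping it as an array. A minimal sketch of that conversion step, assuming the caps negotiated to tightly packed BGR in system memory (on DeepStream you'd typically insert nvvideoconvert before appsink to get out of NVMM memory):

```python
import numpy as np

def buffer_to_mat(data, width, height, channels=3):
    """Wrap raw mapped GstBuffer bytes as a numpy array (the Python
    equivalent of cv::Mat). Assumes tightly packed BGR data with no
    row padding."""
    frame = np.frombuffer(data, dtype=np.uint8)
    return frame.reshape((height, width, channels))

# In a real appsink "new-sample" callback you would do roughly
# (names below follow the PyGObject Gst bindings):
#   sample = appsink.emit("pull-sample")
#   buf = sample.get_buffer()
#   caps = sample.get_caps().get_structure(0)
#   ok, info = buf.map(Gst.MapFlags.READ)
#   mat = buffer_to_mat(info.data,
#                       caps.get_value("width"),
#                       caps.get_value("height"))
#   buf.unmap(info)
```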

Thanks for the reply.


uridecodebin uses nvv4l2decoder when it is available; it basically builds the decode pipeline on its own, using hardware-accelerated elements wherever possible. This gives the application portability. You could also swap the MP4 URI for an RTSP one without changing the rest of the pipeline.
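To illustrate the portability point, only the URI changes between a local file and an RTSP camera; the rest of the pipeline string stays the same (paths and the camera URL below are placeholders):

```python
def playback_pipeline(uri):
    # uridecodebin internally plugs nvv4l2decoder when it is available;
    # nvvideoconvert bridges NVMM memory to the display sink.
    return f"uridecodebin uri={uri} ! nvvideoconvert ! nveglglessink"

# Same pipeline, different sources:
file_pipe = playback_pipeline("file:///path/to/video.mp4")
rtsp_pipe = playback_pipeline("rtsp://camera.local/stream")
```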

I haven’t tested this, but I am 90% sure that approach 2 performs better than approach 4.

Have you considered encoding the frames and sending them over RTSP or WebRTC? We have worked with clients offering this kind of solution and it has worked well for them. Since the encoder is HW-accelerated it doesn’t introduce significant overhead, and this way you can control the bitrate to avoid network congestion.
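A rough sketch of that idea, encoding with the hardware H.264 encoder and streaming over RTP/UDP as a stand-in for a full RTSP/WebRTC setup. The element names follow DeepStream/Jetson conventions, but the host, port, bitrate, and file path are placeholder assumptions:

```python
def streaming_pipeline(host, port, bitrate_bps=4_000_000):
    # nvv4l2h264enc is the HW-accelerated encoder; its bitrate property
    # lets you cap bandwidth to avoid network congestion.
    return (
        "uridecodebin uri=file:///path/to/video.mp4 ! "
        "nvvideoconvert ! "
        f"nvv4l2h264enc bitrate={bitrate_bps} ! "
        "h264parse ! rtph264pay ! "
        f"udpsink host={host} port={port}"
    )
```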

There is an open-source version of WebRTC here; you can give it a try. If you are interested in learning more, we provide a few custom solutions for WebRTC and RTSP: GstWebRTC, GStreamer WebRTC Wrapper, and GstRtspSink.

We can always discuss it in more detail; send us your requirements and we will be happy to help.