GPU-accelerated reading from an RTSP stream


We have a video analytics solution for real-time CCTV analytics. The product connects to CCTV cameras in real time over an RTSP feed using GStreamer and OpenCV.

The product is built using Python.

**Problem Statement**

  1. RTSP stream: an example RTSP video_source is: ‘rtsp://admin:admin12345@’
  2. The URI that we construct is as follows:

uri = f"rtspsrc location={self.video_source} latency=0 ! decodebin ! videoconvert ! video/x-raw, format=BGR ! videoscale ! video/x-raw, width={self.frame_width}, height={self.frame_height} ! appsink"

  3. We open a connection through this URI using OpenCV:
    cap = cv2.VideoCapture(uri)

  4. We read frames from this cap object:
    ret, frame = cap.read()
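Putting the steps above together, a minimal sketch of this CPU-decode capture path might look like the following. The camera address and frame dimensions are placeholders, and `build_cpu_pipeline` is a hypothetical helper introduced here for illustration; passing `cv2.CAP_GSTREAMER` explicitly tells OpenCV to use its GStreamer backend rather than guessing.

```python
def build_cpu_pipeline(video_source: str, frame_width: int, frame_height: int) -> str:
    """Assemble the software-decode pipeline described above (decodebin + videoconvert)."""
    return (
        f"rtspsrc location={video_source} latency=0 ! decodebin ! videoconvert ! "
        f"video/x-raw, format=BGR ! videoscale ! "
        f"video/x-raw, width={frame_width}, height={frame_height} ! appsink"
    )

# Placeholder camera address and output size:
uri = build_cpu_pipeline("rtsp://admin:admin12345@192.0.2.1", 1280, 720)

# With an OpenCV build that has GStreamer support:
#   import cv2
#   cap = cv2.VideoCapture(uri, cv2.CAP_GSTREAMER)
#   ret, frame = cap.read()   # frame is an (h, w, 3) BGR numpy array
print(uri)
```

Every element after `decodebin` here (`videoconvert`, `videoscale`, the BGR conversion) runs on the CPU, which is consistent with the load described below.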

The problem is that this process is very CPU-intensive: on a 10th-generation Intel i7 processor, the CPU is 100% utilized with just 10 camera streams.

We debugged and found that decodebin and appsink in the overall pipeline are both RAM- and CPU-intensive.

Our OpenCV build includes GStreamer support.

**Search for a solution**

When we started searching for a solution, we read this blog post: How to install Nvidia Gstreamer plugins (nvenc, nvdec) on Ubuntu? - LifeStyleTransfer
and understood that the NVDEC plugin would significantly reduce CPU usage. We were able to follow the steps in the blog post.

We then looked at this post to use GStreamer to bypass OpenCV completely and get frames directly from GStreamer for inference.

We were able to make this work as well. We followed the code from GitHub - jackersson/gst-python-tutorials, as mentioned in the blog post, and are able to get individual frames as described there:

array = np.ndarray(shape=(h, w, c), buffer=buffer.extract_dup(0, buffer.get_size()), dtype=utils.get_np_dtype(video_format))
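For context, the `np.ndarray` call above reinterprets the raw bytes pulled from the appsink buffer as an image array. A small helper (a sketch; `buffer_to_frame` is a name introduced here, `extract_dup` is the GStreamer buffer call from the tutorial, and the shape/dtype come from the negotiated caps) makes the conversion explicit and testable:

```python
import numpy as np

def buffer_to_frame(data: bytes, h: int, w: int, c: int, dtype=np.uint8):
    """Reinterpret raw appsink bytes as an (h, w, c) image array without copying."""
    return np.ndarray(shape=(h, w, c), buffer=data, dtype=dtype)

# In the gst-python tutorial this is called from the appsink "new-sample"
# callback, roughly:
#   data = buffer.extract_dup(0, buffer.get_size())
#   frame = buffer_to_frame(data, h, w, c)
frame = buffer_to_frame(bytes(2 * 3 * 3), h=2, w=3, c=3)
print(frame.shape)  # (2, 3, 3)
```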

**Still not able to solve**

However, we have not been able to use NVDEC and GStreamer together in Python. To get frames into Python, we cannot bypass appsink, which we believe is the main culprit here.

May I know your system environment setup?


The setup is as follows:

  1. Intel i5-8th Generation
  2. 8GB RAM
  3. GTX 1050 4GB
  4. Driver Version: 460.32.03
  5. CUDA Version 10.0.130
  6. Ubuntu 18.04
  7. GStreamer 1.14.5
  8. Opencv 4.3.0, locally compiled on the platform with ffmpeg and gstreamer bindings
  9. Video Codec version: Video_Codec_SDK_11.0.10
  10. Python 3.6.9


Hi @kshitij.sharma1 ,
You could try a pipeline like the one below:

gst-launch-1.0 -e rtspsrc location=rtsp:// ! decodebin ! nvvideoconvert ! appsink …


AFAIK, nvvideoconvert is available with DeepStream only. I am looking at this link: Ubuntu GStreamer warning: Error opening bin: no element "nvvideoconvert" - Stack Overflow

Will I be able to run deepstream on the above GPU, i.e., GTX 1050 or GTX 1060?

On that note, may I know if I can run deepstream on an RTX 2060?


Yes. So, you don’t want to use DS?


I do want to use DS. I just didn’t know I could use it with GTX 1050 or GTX 1060 and RTX 2060.

Could you confirm that it is indeed usable with all three?

I have said YES. What else do you want me to confirm?

I didn’t mean to offend. I was just confirming to make sure I understood the communication correctly. I will test it and let you know how it goes.

Many thanks.


As mentioned in Quickstart Guide — DeepStream 6.1.1 Release documentation , GTX 1080 is in the list.
And, according to "CUDA-Enabled GeForce and TITAN Products", the 1060 and 1050 are in the same GPU series as the 1080.


Hi, we installed DeepStream on 1050 and were able to make this pipeline work:
uri = f"rtspsrc location=rtsp://admin:admin12345@ ! decodebin ! nvvideoconvert ! appsink"

Thank you for the pointers.

We also observed that this pipeline was a little slow, probably because the scaling is done on the CPU by videoscale:
uri = f"rtspsrc location=rtsp://admin:admin12345@ ! decodebin ! nvvideoconvert ! videoscale ! video/x-raw, width={frame_width}, height={frame_height} ! appsink"

One thing we observed is that the frame shape is (1080, 1920), i.e., a single-channel grayscale image. How can we get an RGB or BGR image? We tried this pipeline:
uri = f"rtspsrc location=rtsp://admin:admin12345@ ! decodebin ! nvvideoconvert ! video/x-raw, width={frame_width}, height={frame_height}, format=(string)RGBA ! appsink"

But got an error:
(gst-plugin-scanner:21189): GStreamer-WARNING **: 15:23:26.037: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/': cannot open shared object file: No such file or directory

(python:21183): GStreamer-CRITICAL **: 15:23:26.937: gst_mini_object_copy: assertion 'mini_object != NULL' failed

(python:21183): GStreamer-CRITICAL **: 15:23:26.937: gst_caps_get_structure: assertion 'GST_IS_CAPS (caps)' failed

Hi Kshitij,

That’s great that you were able to use DeepStream. You could try the following pipeline:

uri = f"rtspsrc location=rtsp://admin:admin12345@ latency=0 ! queue ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvideoconvert ! video/x-raw, width={frame_width}, height={frame_height}, format=RGB ! appsink"

Let me know if this works.
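For reference, the suggested hardware-decode pipeline could be assembled in Python like this. This is a sketch: the camera address and dimensions are placeholders, `build_nvdec_pipeline` is a name introduced here, and `rtph264depay`/`h264parse`/`nvv4l2decoder` assume an H.264 camera stream and a DeepStream installation.

```python
def build_nvdec_pipeline(video_source: str, frame_width: int, frame_height: int) -> str:
    """Hardware-decode pipeline: H.264 depay/parse, NVDEC via nvv4l2decoder,
    then convert/scale on the GPU via nvvideoconvert."""
    return (
        f"rtspsrc location={video_source} latency=0 ! queue ! "
        "rtph264depay ! h264parse ! nvv4l2decoder ! nvvideoconvert ! "
        f"video/x-raw, width={frame_width}, height={frame_height}, format=RGB ! appsink"
    )

# Placeholder camera address and output size:
uri = build_nvdec_pipeline("rtsp://admin:admin12345@192.0.2.1", 1280, 720)
# As before, this string would be passed to cv2.VideoCapture(uri, cv2.CAP_GSTREAMER)
# or used as the appsink pipeline in a gst-python loop.
print(uri)
```

The key difference from the earlier pipeline is that decoding (nvv4l2decoder) and colorspace conversion/scaling (nvvideoconvert) both run on the GPU, leaving only the appsink copy on the CPU.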


Hi Bharat,
The original issue has been resolved. We have a new problem in the form of RGB buffers; I will create a new topic.