OPENCV push stream into RTSP server

Hi all,
I’m having a problem streaming video, using OpenCV, to an RTSP server.

I’m not interested in using the GStreamer RTSP server sink, so please don’t suggest it.

I have a fully working media server that includes RTSP, RTMP, WebSocket and HLS functionality. I would like to push an H264 stream into it.
Server reference:

What I have (currently working):

# python stuff
rtmpUrl = 'rtmp://' + str(opt.camera_id) + ' live=1'
send_gst = "appsrc ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! nvv4l2h264enc bitrate="+ str(config['viewer']['bitrate']) +" ! video/x-h264,stream-format=(string)byte-stream,alignment=(string)au ! h264parse ! queue !  flvmux name=mux ! rtmpsink location='" + rtmpUrl+"'"
out_send = cv2.VideoWriter(send_gst, 0, stream_fps, (stream_width, stream_height))

Everything works properly, but flvmux is not hardware accelerated, so CPU usage is very high, especially at high resolutions.

What I want: I would like to stream H264 directly to RTSP instead of RTMP, without muxing it into FLV. This way I should reduce CPU usage.

I’ve tested this code:

# python stuff
rtspUrl = 'rtsp://' + str(opt.camera_id)
send_gst = "appsrc ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! nvv4l2h264enc bitrate=" + str(config['viewer']['bitrate']) + " ! h264parse ! rtspclientsink location='" + rtspUrl + "'"
out_send = cv2.VideoWriter(send_gst, 0, stream_fps, (stream_width, stream_height))    

but for some reason it doesn’t work; it remains stuck at:

The issue seems limited to the OpenCV VideoWriter, because if I use gst-launch-1.0 everything works properly. In fact, both of these work:
gst-launch-1.0 videotestsrc ! video/x-raw,format=I420,width=640,height=480 ! omxh264enc ! video/x-h264, stream-format=byte-stream ! rtspclientsink location=rtsp://
gst-launch-1.0 rtspsrc location=rtsp://admin:Password@ latency=500 ! queue ! rtph264depay ! h264parse ! rtspclientsink location=rtsp://

I’m going crazy; I cannot find a solution.
Does anyone have an idea?

I haven’t tried this, but you may try to:

  • replace nvv4l2h264enc with omxh264enc, as it seems to behave better. I briefly experimented with UDP streaming and found that the default profile used by nvv4l2h264enc was higher than omxh264enc’s in my case, and even with the fastest preset it was losing sync over UDP while omxh264enc kept it (maybe I missed some options).
  • add h264parse after h264 encoder
  • add rtph264pay config-interval=1 between h264parse and rtspclientsink.
  • add queue after appsrc.

If it works, feel free to try removing any useless part.
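Combined, the suggestions above would yield a writer pipeline like this (a sketch only; the bitrate and RTSP URL are hypothetical placeholders, and element availability depends on your L4T release):

```python
# Sketch: writer pipeline string applying all four suggestions above.
# bitrate and rtsp_url are hypothetical placeholders.
bitrate = 4000000
rtsp_url = "rtsp://203.0.113.10:8554/cam1"

elements = [
    "appsrc",
    "queue",                          # suggestion: queue right after appsrc
    "videoconvert",
    "video/x-raw,format=BGRx",
    "nvvidconv",
    f"omxh264enc bitrate={bitrate}",  # suggestion: omxh264enc instead of nvv4l2h264enc
    "h264parse",                      # suggestion: parse after the encoder
    "rtph264pay config-interval=1",   # suggestion: payload before the sink
    f"rtspclientsink location={rtsp_url}",
]
pipeline = " ! ".join(elements)
print(pipeline)
```

The resulting string would then be passed to cv2.VideoWriter exactly as in your working RTMP example.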

Also note, if you upgrade your OpenCV version/build, that the VideoWriter API in Python may have changed (the second argument is now the backend API to use, such as cv2.CAP_GSTREAMER or cv2.CAP_ANY), but that doesn’t seem to be your case since you have a working pipeline.
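For illustration, the two Python call signatures differ like this (a sketch; videowriter_args is a hypothetical helper, and CAP_GSTREAMER is hardcoded to OpenCV’s value of 1800 so the snippet stays self-contained):

```python
CAP_GSTREAMER = 1800  # value of cv2.CAP_GSTREAMER in OpenCV


def videowriter_args(pipeline, fps, size, new_api=True):
    """Positional arguments for cv2.VideoWriter under each API variant.
    Hypothetical helper for illustration only."""
    if new_api:
        # Newer builds: (filename, apiPreference, fourcc, fps, frameSize)
        return (pipeline, CAP_GSTREAMER, 0, fps, size)
    # Older builds: (filename, fourcc, fps, frameSize)
    return (pipeline, 0, fps, size)


old_args = videowriter_args("appsrc ! fakesink", 30.0, (640, 480), new_api=False)
new_args = videowriter_args("appsrc ! fakesink", 30.0, (640, 480), new_api=True)
print(old_args)
print(new_args)
```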

The high CPU usage is from:

It actually does

copy the BGRx CPU buffer to an NVMM buffer -> convert to an NV12 NVMM buffer

OpenCV supports CPU buffers only, so the memcpy is unavoidable and cannot be eliminated.
And some usage comes from:

This converts BGR to BGRx on the CPU.
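The BGR-to-BGRx step that videoconvert performs on the CPU is essentially a per-pixel channel expansion. A pure-Python sketch of the equivalent work (real code does this with vectorized/SIMD loops over whole frames, so this is only to show what the conversion costs per pixel):

```python
def bgr_to_bgrx(frame: bytes) -> bytes:
    """Expand packed 24-bit BGR pixels to 32-bit BGRx by appending a pad byte."""
    out = bytearray()
    for i in range(0, len(frame), 3):
        out += frame[i:i + 3] + b"\x00"
    return bytes(out)


# Two BGR pixels (pure blue, then pure red); each gains a fourth 0x00 pad byte.
pixels = bytes([255, 0, 0, 0, 0, 255])
print(bgr_to_bgrx(pixels).hex())
```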

There is a similar discussion:

Thank you DaneLLL.
I cannot do it another way: I alter the frame in Python, so I have a BGR Mat frame.
So, in order to encode it with nvv4l2h264enc, I need to convert it to a compatible format first.
That’s why I added the BGR to BGRx conversion.

The similar discussion, [Gstreamer] nvvidconv, BGR as INPUT, was opened by me…

I’m exploring the possibility of removing the BGR conversion, but I found a big problem: the OpenCV VideoWriter only accepts 8-bit, 3-channel images, so I cannot pass an RGBA (or any other 4-channel) frame directly to GStreamer.
I’m forced to pass an RGB or BGR frame and convert it on the CPU with GStreamer, so there is no way to avoid the CPU usage.

cv2.error: /home/nvidia/build_opencv/opencv/modules/videoio/src/cap_gstreamer.cpp:1740: error: (-210) cvWriteFrame() needs images with depth = IPL_DEPTH_8U and nChannels = 3. in function CvVideoWriter_GStreamer::writerFrame
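The constraint behind that error can be mirrored in plain Python (a sketch; the function name and shape convention are hypothetical, but the check matches the message: 8-bit depth and exactly 3 channels):

```python
def gstreamer_writable(shape, dtype="uint8"):
    """True if a frame with this (height, width, channels) shape and dtype
    would pass the cap_gstreamer writeFrame check quoted above."""
    channels = shape[2] if len(shape) == 3 else 1
    return dtype == "uint8" and channels == 3


print(gstreamer_writable((480, 640, 3)))            # BGR frame: accepted
print(gstreamer_writable((480, 640, 4)))            # BGRA/RGBA frame: rejected
print(gstreamer_writable((480, 640, 3), "uint16"))  # 16-bit frame: rejected
```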

Thank you for your suggestion.
I’ve already tried adding rtph264pay before rtspclientsink, but it returns an error: GStreamer says that rtph264pay cannot be used before rtspclientsink.

In fact if I re-stream an IP camera using:

gst-launch-1.0 rtspsrc location=rtsp://admin:Password@ latency=500 ! queue ! rtph264depay ! h264parse ! rtspclientsink location=rtsp://

rtph264pay is not required after h264parse, and it works properly.


Sorry I did not notice it. So this is a known limitation of running OpenCV on Jetson platforms. sudo jetson_clocks can bring some improvement, but it may not be significant on Jetson Nano.

So, for the moment, let’s forget about RGB/BGR or other formats.
How can I push a frame (in the right format) using rtspclientsink in OpenCV?

Am I doing something wrong?

We don’t have experience with rtspclientsink.
Please go to to get suggestions.