Jetson TX2 RTSP Streaming, FFmpeg or Gstreamer?


I am using the Jetson TX2 dev board to work on real-time video processing.
Thanks to many other topics, I wrote this code :

#include <iostream>
#include <string>

#include <opencv2/opencv.hpp>
#include <opencv2/core.hpp>

int main()
{
    // VideoCapture pipeline
    std::string Cap_pipeline("nvarguscamerasrc ! "
        "video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=30/1 ! "
        "nvvidconv ! video/x-raw, format=I420 ! appsink");

    // VideoWriter pipeline
    std::string Stream_Pipeline("appsrc is-live=true ! autovideoconvert ! "
        "omxh264enc control-rate=2 bitrate=10000000 ! video/x-h264, "
        "stream-format=byte-stream ! rtph264pay mtu=1400 ! "
        "udpsink host= port=5000 sync=false async=false");

    cv::VideoCapture Cap(Cap_pipeline, cv::CAP_GSTREAMER);
    cv::VideoWriter Stream(Stream_Pipeline, cv::CAP_GSTREAMER,
        30, cv::Size(1920, 1080), true);

    // check for issues
    if (!Cap.isOpened() || !Stream.isOpened()) {
        std::cout << "I/O Pipeline issue" << std::endl;
        return -1;
    }

    while (true) {
        cv::Mat frame;
        Cap >> frame; // read latest frame
        if (frame.empty()) break;

        cv::Mat bgr;
        cv::cvtColor(frame, bgr, cv::COLOR_YUV2BGR_I420);

        // video processing

        Stream.write(bgr); // write the frame to the stream

        char c = (char)cv::waitKey(1);
        if (c == 27) break; // ESC to quit
    }

    return 0;
}

With this code, I can capture, process, and live-stream the video from the onboard CSI camera to a client PC.
To read the stream, I can use GStreamer with the following command:
gst-launch-1.0 udpsrc port=5000 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! h264parse ! queue ! avdec_h264 ! autovideosink sync=false async=false -e

or FFmpeg or VLC using an SDP file.
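For reference, a minimal SDP file describing this stream could look like the following sketch (this assumes payload type 96 and port 5000 as in the pipeline above; the IP address is a placeholder for the receiver's address):

```text
v=0
o=- 0 0 IN IP4 127.0.0.1
s=Jetson H264 stream
c=IN IP4 127.0.0.1
t=0 0
m=video 5000 RTP/AVP 96
a=rtpmap:96 H264/90000
```

Saved as e.g. stream.sdp, it can then be opened in VLC, or with ffplay using `ffplay -protocol_whitelist file,udp,rtp stream.sdp`.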

However, I would like to use an RTSP stream, so that I do not have to use an SDP file, which would be easier in the future.
I have two questions :

  • Is there a way to use code similar to ./test-launch from gst-rtsp-server to stream the cv::Mat processed with OpenCV over RTSP?
  • Which library has better performance between FFmpeg and GStreamer for h264/h265 RTSP video streaming?

OpenCV version: 4.3.0 (installed with the JEP script)
Jetpack version: 4.4 (DeepStream)


The questions are more about OpenCV. We suggest going to the OpenCV forum.

You say you have a pipeline for UDP streaming:

// VideoWriter Pipe
std::string Stream_Pipeline("appsrc is-live=true ! autovideoconvert ! "
"x264enc ! video/x-h264, "
"stream-format=byte-stream ! rtph264pay mtu=1400 ! "
"udpsink host= port=5000 sync=false async=false");

and ask for suggestions on changing it to RTSP streaming. Once you have a working pipeline, you may replace x264enc with the hardware encoder nvv4l2h264enc.
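As a sketch, the VideoWriter pipeline with the hardware encoder could look like the following (untested here; the bitrate value and host address are placeholders, and the nvvidconv stage is added on the assumption that nvv4l2h264enc expects NVMM buffers):

```text
appsrc is-live=true ! autovideoconvert ! nvvidconv ! "video/x-raw(memory:NVMM)" ! nvv4l2h264enc bitrate=10000000 ! video/x-h264, stream-format=byte-stream ! rtph264pay mtu=1400 ! udpsink host=<client-ip> port=5000 sync=false async=false
```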

We are deprecating the OMX plugins; please use the v4l2 plugins for video encoding/decoding.


Hi @DaneLLL

Thank you for your help, I will ask this question on the OpenCV forum.

You may use the VideoWriter with a GStreamer pipeline producing the h264 stream and sending it to shmsink, so that you can get it from shmsrc inside test-launch:

#include <iostream>

#include <opencv2/opencv.hpp>
#include <opencv2/videoio.hpp>

int main ()
{
  //setenv ("GST_DEBUG", "*:3", 0);

  /* Setup capture with gstreamer pipeline from onboard camera converting into BGR frames for app */
  const char *gst_cap =
    "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=(string)NV12, width=(int)640, height=(int)480, framerate=(fraction)30/1 ! "
    "nvvidconv    ! video/x-raw, format=(string)BGRx ! "
    "videoconvert ! video/x-raw, format=(string)BGR  ! "
    "appsink";

  cv::VideoCapture cap (gst_cap, cv::CAP_GSTREAMER);
  if (!cap.isOpened ()) {
    std::cout << "Failed to open camera." << std::endl;
    return (-1);
  }

  unsigned int width = cap.get (cv::CAP_PROP_FRAME_WIDTH);
  unsigned int height = cap.get (cv::CAP_PROP_FRAME_HEIGHT);
  unsigned int fps = cap.get (cv::CAP_PROP_FPS);
  unsigned int pixels = width * height;
  std::cout << " Frame size : " << width << " x " << height << ", " << pixels <<
    " Pixels " << fps << " FPS" << std::endl;

  cv::VideoWriter h264_shmsink
     ("appsrc is-live=true ! queue ! videoconvert ! video/x-raw, format=RGBA ! nvvidconv ! "
      "omxh264enc insert-vui=true ! video/x-h264, stream-format=byte-stream ! h264parse ! shmsink socket-path=/tmp/my_h264_sock ",
     cv::CAP_GSTREAMER, 0, fps, cv::Size (width, height));
  if (!h264_shmsink.isOpened ()) {
    std::cout << "Failed to open h264_shmsink writer." << std::endl;
    return (-2);
  }

  /* Loop for 3000 frames (100 s at 30 fps) */
  cv::Mat frame_in;
  int frameCount = 0;
  while (frameCount++ < 3000) {
    if (!cap.read (frame_in)) {
      std::cout << "Capture read error" << std::endl;
      break;
    }
    else {
      h264_shmsink.write (frame_in);
    }
  }

  cap.release ();

  return 0;
}

Build it, then launch it. It will write the h264 stream into /tmp/my_h264_sock.

So now you would launch your RTSP server with:

./test-launch "shmsrc socket-path=/tmp/my_h264_sock ! video/x-h264, stream-format=byte-stream, width=640, height=480, framerate=30/1 ! h264parse ! video/x-h264, stream-format=byte-stream ! rtph264pay pt=96 name=pay0 "

Just checked from localhost with:

gst-launch-1.0 -ev rtspsrc location=rtsp:// ! application/x-rtp, media=video, encoding-name=H264 ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! videoconvert ! xvimagesink

…seems to work fine so far, at least in this case.

Got surprising results with the nvv4l2 encoder/decoder, so I propose using the OMX plugins for now in this case.
For using VLC on receiver side, some extra work may be required.

Note that sockets are created by shmsink, but won’t be deleted if a shmsrc is still connected.
So after each trial when your app and test-launch are closed, remove any remaining socket with:

rm /tmp/my_h264_sock*

before trying again.

Hi @Honey_Patouceul

Indeed, this solution could work. I think I will use it temporarily, but I'll try to merge OpenCV and test-launch.c later to obtain something more user-friendly.

Thank you for your help,
Matteo Luci