H.264 video codec of GStreamer

I am trying to use GStreamer on the Jetson Nano to hardware-decode a video stream. The command is as follows:
(python dec.py --rtsp --uri "rtsp://")
but it reports errors every time it runs. The errors are shown below.
nvbuf_utils: Could not get EGL display connection
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline
) in cvCaptureFromCAM_GStreamer, file /home/lwin/Downloads/opencv/opencv-3.4.0/modules/videoio/src/cap_gstreamer.cpp, line 890
VIDEOIO(cvCreateCapture_GStreamer (CV_CAP_GSTREAMER_FILE, filename)): raised OpenCV exception:
/home/lwin/Downloads/opencv/opencv-3.4.0/modules/videoio/src/cap_gstreamer.cpp:890: error: (-2) GStreamer: unable to start pipeline
in function cvCaptureFromCAM_GStreamer
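For reference, opening a stream through OpenCV's GStreamer backend usually means handing cv2.VideoCapture a full pipeline string rather than a bare URI. A minimal sketch of what dec.py might build (the helper name, URI placeholders, and exact element chain are assumptions; omxh264dec is the Nano's hardware decoder, and nvvidconv/videoconvert bring frames into BGR for OpenCV):

```python
# Sketch: build a GStreamer pipeline string for OpenCV's VideoCapture.
# The RTSP URI is a placeholder; appsink hands decoded frames to OpenCV.
def make_pipeline(uri, width=1920, height=1080):
    return (
        "rtspsrc location={} ! rtph264depay ! h264parse ! omxh264dec ! "
        "nvvidconv ! video/x-raw, width={}, height={}, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink"
    ).format(uri, width, height)

pipeline = make_pipeline("rtsp://<user>:<password>@<camera-ip>")
# import cv2
# cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
```

If the pipeline string itself is wrong (or OpenCV was built without GStreamer support), VideoCapture fails with exactly the "unable to start pipeline" error shown above.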
I used this command to check the status of the camera:
(gst-launch-1.0 rtspsrc location="rtsp://" ! rtph264depay ! h264parse ! omxh264dec ! nveglglessink)
It replied as follows:
nvbuf_utils: Could not get EGL display connection
Setting pipeline to PAUSED …

Using winsys: x11
ERROR: Pipeline doesn’t want to pause.
Setting pipeline to NULL …
Freeing pipeline …
I tried some solutions, but they all failed. Can you give me some suggestions?

To access the EGL display you need to have the GUI enabled on the Jetson and to run the script from a local terminal, not over SSH.
If you have disabled the GUI, you can write directly to HDMI by using nvdrmvideosink instead of nveglglessink.
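For example, the earlier test pipeline could be retried with only the sink swapped (a sketch, untested here; the URI is a placeholder):

```shell
gst-launch-1.0 rtspsrc location="rtsp://<camera-uri>" ! rtph264depay ! h264parse ! omxh264dec ! nvdrmvideosink
```

nvdrmvideosink renders through DRM/KMS, so it does not need a running X server or EGL display.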

Anyway, to check the camera status, I suggest you use:

gst-discoverer-1.0 rtsp://admin:Password@
Analyzing rtsp://admin:Password@
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
Done discovering rtsp://admin:Password@

  unknown: application/x-rtp
    video: H.264 (High Profile)

  Duration: 99:99:99.999999999
  Seekable: no
  Live: yes
      video codec: H.264 (High Profile)

Hi simone.rinaldi, thank you for your reply.
I ran the command on the Nano; the error is as follows:
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline
) in cvCaptureFromCAM_GStreamer, file /home/lwin/Downloads/opencv/opencv-3.4.0/modules/videoio/src/cap_gstreamer.cpp, line 890
VIDEOIO(cvCreateCapture_GStreamer (CV_CAP_GSTREAMER_FILE, filename)): raised OpenCV exception:
/home/lwin/Downloads/opencv/opencv-3.4.0/modules/videoio/src/cap_gstreamer.cpp:890: error: (-2) GStreamer: unable to start pipeline
in function cvCaptureFromCAM_GStreamer
It can't be used either.
I used the suggested command to check the camera; the printout is shown in the chart above. Is the camera working normally?

Please check if the RTSP source works with nvoverlaysink:

gst-launch-1.0 rtspsrc location="rtsp://" ! rtph264depay ! h264parse ! omxh264dec ! nvoverlaysink

This is the reply to that command…

It looks like the URI is not correct. You may inspect it first.

Hi DaneLLL, thank you!
This was related to the camera. Because different camera manufacturers use different formats for the username and password when fetching the stream, the stream could not be opened. After I modified the username and password according to the format specified by the manufacturer, the stream could be fetched normally.
Excuse my bad English. Best wishes to you!
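One related pitfall worth noting: if the username or password contains URI-reserved characters (@, :, /), they must be percent-encoded in the RTSP URI or the server will reject the credentials. A small illustration in Python (the password and host here are made up):

```python
from urllib.parse import quote

# Percent-encode a password containing URI-reserved characters
# before embedding it in an RTSP URI.
password = "p@ss:word"
encoded = quote(password, safe="")
uri = "rtsp://admin:{}@<camera-ip>/stream".format(encoded)
print(encoded)  # p%40ss%3Aword
```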

Recently, I tried to encode a video stream on the Nano. I want to encode an mp4 video using H.264 or H.265, and then decode it back to MKV or MP4 format. I don't know whether this idea is correct.

the gst command for encoding is: gst-launch-1.0 filesrc location=xxx.mp4 ! 'video/x-raw, format=(string)I420, width=(int)640, height=(int)480' ! omxh264enc ! 'video/x-h264,stream-format=byte-stream' ! filesink location=test0.h264

the gst command for decoding is: gst-launch-1.0 filesrc location=test0.h264 ! h264parse ! omxh264dec ! filesink location=test1.mp4

Is this idea right? Thank you!

Your pipeline is not right. We have examples in the user guide. Please check them.

MKV, MP4, MOV, and AVI are just container formats that can hold various video, audio, and metadata streams; H.264 and H.265 are video codecs that go inside the containers.

If you want to put an mp4 video containing H.264 or H.265 inside a .mkv container, you're best off not re-encoding. Re-encoding will always lose quality, even if H.265 is a better codec: compression artifacts look like detail to compressors.

Instead you just need to demux (take apart) the .mp4 and remux (re-assemble) the streams inside the .mkv. You can do that with ffmpeg.

ffmpeg -i input.mp4 -codec copy output.mkv

If ffmpeg is not installed, you can install it with "sudo apt install ffmpeg". If you absolutely do need to transcode (decode and re-encode), some examples to point you in the right direction can be found in the link in the post above.
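An equivalent copy-without-re-encoding remux is also possible in GStreamer; a sketch, assuming the mp4 holds an H.264 track (filenames are placeholders):

```shell
gst-launch-1.0 filesrc location=input.mp4 ! qtdemux ! h264parse ! matroskamux ! filesink location=output.mkv
```

qtdemux takes the mp4 apart and matroskamux reassembles the stream in an MKV; no encoder or decoder is involved, so no quality is lost.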



Hello, thank you.
My intention is to do hardware encoding and decoding of a video stream on the Nano. I want to use GStreamer commands to do it, not ffmpeg.
With GStreamer's videotestsrc I got encode/decode to the display working, but I don't know how to drive the GStreamer pipeline from code.

There are some samples of launching a GStreamer pipeline from C code. FYR.


Hi, thank you.

I referred to your link and used videotestsrc as the video source, modifying some of the commands in the code, but the generated MP4 file cannot be played normally (in VLC), and the code does not report any errors during execution. This is very strange. Can you give me some suggestions? Thank you again.

#include <gst/gst.h>
#include <sstream>   // for ostringstream
#include <string>

using namespace std;

#define USE(x) ((void)(x))

static GstPipeline *gst_pipeline = nullptr;
static string launch_string;

GstClockTime usec = 1000000;
static int w = 1920;
static int h = 1080;

int main(int argc, char** argv) {

    gst_init(&argc, &argv);

    GMainLoop *main_loop;
    main_loop = g_main_loop_new(NULL, FALSE);
    USE(main_loop);
    ostringstream launch_stream;

    launch_stream
    //<< "videotestsrc  ! "
    //<< "video/x-raw(memory:NVMM),width="<< w <<",height="<< h <<",framerate=30/1,format=NV12 ! "
    //<< "omxh264enc ! h264parse ! qtmux ! "
    //<< "filesink location=test1.mp4 ";
    << "videotestsrc name=mysource num-buffers=120 ! "
    << "video/x-raw,width="<< w <<",height="<< h <<" ! "
    << "omxh264enc ! "
    << "video/x-h264,stream-format=byte-stream ! "
    << "filesink location=test1.mp4 ";

    launch_string = launch_stream.str();

    g_print("Using launch string: %s\n", launch_string.c_str());

    GError *error = nullptr;
    gst_pipeline = (GstPipeline*) gst_parse_launch(launch_string.c_str(), &error);

    if (gst_pipeline == nullptr) {
        g_print("Failed to parse launch: %s\n", error->message);
        return -1;
    }
    if (error) g_error_free(error);

    gst_element_set_state((GstElement*)gst_pipeline, GST_STATE_PLAYING);

    GstElement* src = gst_bin_get_by_name(GST_BIN(gst_pipeline), "mysource");
    gst_element_send_event(src, gst_event_new_eos());
    // Wait for the EOS message so the encoder flushes before teardown
    GstBus *bus = gst_pipeline_get_bus(GST_PIPELINE(gst_pipeline));
    gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE, GST_MESSAGE_EOS);

    gst_element_set_state((GstElement*)gst_pipeline, GST_STATE_NULL);

    g_print("going to exit\n");
    return 0;
}


The pipeline generates an h264 elementary stream. You need qtmux to save a playable mp4:

<< "videotestsrc name=mysource num-buffers=120 ! "
<< "video/x-raw,width="<< w <<",height="<< h <<" ! "
<< "omxh264enc ! "
<< "video/x-h264,stream-format=byte-stream ! "
<< "h264parse ! qtmux ! "
<< "filesink location=test1.mp4 ";

Thank you very much. The problem was solved just as you said.

Now I need to study how to feed video data in for encoding and then send it out over the network. Do you have a good example to share?


Referring to this topic, I don't know how to proceed.
I did a small test with this open-source code, but it always printed the following report during debugging.

I tried many ways. Let me know if you have any questions or suggestions. Thank you!

You may refer to test-launch.
It is a gstreamer sample that launches an RTSP server.
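For reference, test-launch (from the gst-rtsp-server examples) takes a pipeline description ending in a payloader named pay0 and serves it over RTSP. A typical invocation on the Nano might look like this (a sketch; the encoder element and client address are assumptions):

```shell
# Serve a test pattern, hardware-encoded to H.264, over RTSP
./test-launch "videotestsrc is-live=1 ! omxh264enc ! rtph264pay name=pay0 pt=96"

# Then play it from another machine:
# gst-launch-1.0 playbin uri=rtsp://<nano-ip>:8554/test
```

By default test-launch listens on port 8554 and mounts the stream at /test.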

Attachments and Codes
encode_0.cpp (8.03 KB)