How can I use GStreamer in my OpenCV program (on a TX1 board)?

I want to decode an IP camera's H264 video stream and process it with OpenCV in real time. I have tried VideoCapture("rtsp://…"), but the real-time display is terrible, with delay, stutter and mosaic artifacts. However, when I run a gst-launch-1.0 command in a terminal, the live video looks perfect. So I want to use GStreamer in my OpenCV program, but how? I don't have any idea and can't find any examples. Can someone help me please?
Thanks!

Not sure it can handle your requirements, but you could read this as a starting point: https://devtalk.nvidia.com/default/topic/1001696/jetson-tx1/failed-to-open-tx1-on-board-camera/post/5117370/#5117370

Hello Honey_Patouceul! Thank you! But my OpenCV is 2.4.13 for some reason, and I may not want to change to OpenCV 3. So is there any solution?
Thank you!

The question is why you want to keep opencv-2.4.13.
If you want to keep the Tegra optimizations of opencv4tegra, then you might not get GStreamer support.
If you want to keep the 2.4 API, then you can compile your own version of opencv-2.4.13 (without Tegra optimizations) with GStreamer support enabled. However, the opencv3 API is not so different.

Hello Honey_Patouceul! I have studied the link you gave me; however, I think it does not address my case (decoding an IP camera's H264 video stream). Thank you for your kindness! Is there any solution?

I was mainly replying about how to use gstreamer in opencv.
For your specific task, you have to use a different gst pipeline. Maybe using

h264parse ! omxh264dec

would perform h264 decoding.

I would suggest you get a working pipeline from the shell with the command gst-launch-1.0, and once it works, use this pipeline in OpenCV.

Hello Honey_Patouceul! I have gotten a working pipeline from the shell with gst-launch-1.0 and it gave a perfect result. But when I use this pipeline in VideoCapture cap("gst-launch-1.0 …"), my code fails with the message "cannot find camera."

Have you built and installed an opencv-2.4.13 or opencv-3.2.0 with gstreamer-1.0 support?

I installed JetPack. I think the installed OpenCV is opencv4tegra, and it's OpenCV 2.4.13 according to JetPack's information. Is there anything wrong?

Yes, as said in post 4, I think the opencv4tegra installed by JetPack doesn't provide gstreamer-1.0 support.
So you have to build and install another OpenCV library yourself with GStreamer support enabled. I would advise going to opencv-3.2.0. Check this link for configuring/building/installing: http://dev.t7.ai/jetson/opencv/.
If you prefer to keep opencv-2.4.13, check this one: http://docs.opencv.org/master/d6/d15/tutorial_building_tegra_cuda.html; there is a section on building 2.4.13 on TX1. You will have to configure with

-DWITH_GSTREAMER=ON

for gstreamer-1.0 support.

If I build 2.4.13 on TX1 and configure with "-DWITH_GSTREAMER=ON", does it also support RTSP video streams?

If you have GStreamer support, you should be able to use rtsp in your GStreamer pipeline.
Check this thread for such a pipeline: https://devtalk.nvidia.com/default/topic/930547/rtsp-server-gstreamer-pipeline

Thank you again! I want to use GStreamer in my OpenCV code to decode a Hikvision IP camera's RTSP video stream, but I don't know how to write the code. I tried VideoCapture cap("gst-launch-1.0 rtspsrc location=rtsp://admin:admin… latency=10 ! decodebin ! autovideosink"), but the capture cannot open. However, when I run the same "gst-launch-1.0 rtspsrc location=rtsp://admin:admin… latency=10 ! decodebin ! autovideosink" command in an Ubuntu terminal, I can preview the camera successfully. Can you tell me whether there is anything wrong with that VideoCapture call? If it's right, I'll try building 2.4.13 on TX1 and configuring with "-DWITH_GSTREAMER=ON".
Sorry to bother you so many times.

Well, if you want to access your pipeline from OpenCV, it should be something like:

const char* gst = "rtspsrc location=rtsp://admin:admin... latency=10 ! decodebin ! appsink"; // Your gst pipeline to opencv application
cv::VideoCapture cap(gst);

Hello Honey_Patouceul, I have now built opencv-2.4.13 on TX1 following your advice in post 10. My code below displays a still picture, not a live video, and the program never reaches the OpenCV processing part after the definition of "cap". I feel so upset that I have to ask for your help again. Please do me a favour. Thank you.

    Mat frame;
    //const char* gst="rtspsrc location=rtsp://admin:admin12345@192.168.1.64:554/h264/ch33/main/av_stream ! rtpmp2tdepay ! tsparse ! tsdmux ! queue ! h264parse ! avdec_h264 ! autovideosink sync=false ! appsink";
    //VideoCapture cap(gst);
    VideoCapture cap("rtspsrc location=rtsp://admin:admin12345@192.168.1.64:554/h264/ch33/main/av_stream latency=10 ! decodebin ! autovideosink sync=false ! appsink");
    //VideoCapture cap("rtsp://admin:admin12345@192.168.1.64:554/h264/ch33/main/av_stream");
    if(!cap.isOpened())
    {
       printf("Camera not found\n");
       return a.exec();
    }
    while ( cap.isOpened() )
    {
        cap >> frame;
        if(frame.empty()) break;
        frame.resize(720,720);
        qDebug("frame\n");
        imshow("video", frame);
        cv::waitKey(1);
    }

Do you have some code before that which displays a picture?

If it displays a picture instead of video, it might just be that the timeout in your loop is too short for imshow() to draw.
You should adjust it to your framerate. For 30 fps, you can use cv::waitKey(33) to wait for the next frame.

You can also declare the window before the loop with:

cv::namedWindow("video", CV_WINDOW_AUTOSIZE);

Hello Honey_Patouceul! Unfortunately, the changes you suggested don't work: the result is still a picture captured at the beginning, not a live video. My code is now as follows. I've found that there is no "def", "before while" or "frame" in the qDebug output window, so I think it just runs the gst-launch-1.0 command and stops.

    Mat frame;
    //const char* gst="rtspsrc location=rtsp://admin:admin12345@192.168.1.64:554/h264/ch33/main/av_stream ! rtpmp2tdepay ! tsparse ! tsdmux ! queue ! h264parse ! avdec_h264 ! autovideosink sync=false ! appsink";
    //VideoCapture cap(gst);
    VideoCapture cap("rtspsrc location=rtsp://admin:admin12345@192.168.1.64:554/h264/ch33/main/av_stream  width=(int)1920 height=(int)1080 framerate=25/1 ! decodebin ! autovideosink sync=false ! appsink");
    //VideoCapture cap("rtsp://admin:admin12345@192.168.1.64:554/h264/ch33/main/av_stream");
    qDebug("def\n");
    if(!cap.isOpened())
    {
       printf("Camera not found\n");
       return a.exec();
    }
    qDebug("before while\n");
    cv::namedWindow("video", CV_WINDOW_AUTOSIZE);
    while ( cap.isOpened() )
    {
        cap >> frame;
        if(frame.empty()) break;
        frame.resize(720,720);
        qDebug("frame\n");
        imshow("video", frame);
        cv::waitKey(33);
    }

Not sure, but it looks strange to me to have two sinks in your pipeline. Maybe you can try removing autovideosink from your pipeline when using appsink in your program.

Hello Honey_Patouceul! When I leave just "appsink" in my pipeline, like this: VideoCapture cap("rtspsrc location=rtsp://admin:admin12345@192.168.1.64:554/h264/ch33/main/av_stream width=(int)1920 height=(int)1080 framerate=25/1 ! decodebin ! appsink"); and run my code, the error log is as follows.

(HCRSTP:1754): GLib-GObject-WARNING **: invalid cast from 'GstAppSink' to 'GstBin'

(HCRSTP:1754): GStreamer-CRITICAL **: gst_bin_iterate_elements: assertion 'GST_IS_BIN (bin)' failed

(HCRSTP:1754): GStreamer-CRITICAL **: gst_iterator_next: assertion 'it != NULL' failed

(HCRSTP:1754): GStreamer-CRITICAL **: gst_iterator_free: assertion 'it != NULL' failed
OpenCV Error: Unspecified error (GStreamer: cannot find appsink in manual pipeline
) in cvCaptureFromCAM_GStreamer, file /media/ubuntu/FA8654008653BBB7/opencv/modules/highgui/src/cap_gstreamer.cpp, line 743
terminate called after throwing an instance of 'cv::Exception'
  what():  /media/ubuntu/FA8654008653BBB7/opencv/modules/highgui/src/cap_gstreamer.cpp:743: error: (-2) GStreamer: cannot find appsink in manual pipeline
 in function cvCaptureFromCAM_GStreamer

Press <RETURN> to close this window...

And when I delete "decodebin ! " from my pipeline, the error log is as follows.

GStreamer Plugin: Embedded video playback halted; module udpsrc2 reported: Internal data flow error.
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline
) in cvCaptureFromCAM_GStreamer, file /media/ubuntu/FA8654008653BBB7/opencv/modules/highgui/src/cap_gstreamer.cpp, line 816
terminate called after throwing an instance of 'cv::Exception'
  what():  /media/ubuntu/FA8654008653BBB7/opencv/modules/highgui/src/cap_gstreamer.cpp:816: error: (-2) GStreamer: unable to start pipeline
 in function cvCaptureFromCAM_GStreamer

Press <RETURN> to close this window...

Hi Sully,

I’ve seen this post: https://devtalk.nvidia.com/default/topic/992583/jetson-tx1/streaming-opencv-output-via-rtsp-using-vlc-on-tx1/post/5087710/#5087710.

Maybe you could give opencv-3.2.0 a chance.