GStreamer pipeline fails to open IP camera in cv::VideoCapture?

Hello everyone!
I’ve been trying to solve a problem for two weeks. The problem is that I want to use GStreamer to speed up the OpenCV “VideoCapture” function, but “cap.isOpened()” returns false, so I can’t get video frames to process.

Firstly, I used

VideoCapture cap("rtsp://admin:admin12345@192.168.1.64:554/h264/ch33/main/av_stream")

and the video display quality was very poor, with delay, stuttering and mosaic artifacts. Someone here told me that I could use GStreamer to speed it up, but he didn’t give detailed information.

Secondly, I followed the link https://devtalk.nvidia.com/default/topic/1001696/jetson-tx1/failed-to-open-tx1-on-board-camera/post/5117370/#5117370 to check whether my OpenCV had GStreamer support, and the result was fine: I could see the on-board camera video.

Thirdly, I tested

gst-launch-1.0 rtspsrc location="rtsp://admin:admin12345@192.168.1.64:554/h264/ch33/main/av_stream" latency=0 ! decodebin ! autovideosink

in the shell, and the result was fine: I could see my IP camera video.

Then I tested many kinds of GStreamer pipelines in the “VideoCapture” function, but I have not managed to see my IP camera video. My code is as follows.

#include <iostream>

#include <opencv2/opencv.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui.hpp>

int main()
{
    const char* gst =  "rtspsrc location=\"rtsp://admin:admin12345@192.168.1.64:554/h264/ch33/main/av_stream\" ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)25/1 ! \
            nvvidconv flip-method=2 ! video/x-raw, format=(string)I420 ! \
            videoconvert ! video/x-raw, format=(string)BGR ! \
            appsink";

    cv::VideoCapture cap(gst);

    if (!cap.isOpened())
    {
        std::cout << "Failed to open camera." << std::endl;
        return -1;
    }

    unsigned int width = cap.get(CV_CAP_PROP_FRAME_WIDTH);
    unsigned int height = cap.get(CV_CAP_PROP_FRAME_HEIGHT);
    unsigned int pixels = width*height;
    std::cout <<"Frame size : "<<width<<" x "<<height<<", "<<pixels<<" Pixels "<<std::endl;

    cv::namedWindow("MyCameraPreview", CV_WINDOW_AUTOSIZE);
    cv::Mat frame_in(height, width, CV_8UC3); // Mat(rows, cols, type)

    while (1)
    {
        if (!cap.read(frame_in)) {
            std::cout << "Capture read error" << std::endl;
            break;
        }
        else {
            cv::imshow("MyCameraPreview", frame_in);
            cv::waitKey(1000/25); // let imshow draw and wait for next frame 40 ms for 25 fps
        }
    }

    return 0;
}


I think the problem is that the GStreamer pipeline I used is wrong.

BTW, my platform is a Jetson TX1 with Qt 5.5 and OpenCV 2.4.13, which has been rebuilt with GStreamer 1.0.
Thank you for reading this far; I would appreciate it if you could give me some advice.

Hi,

Thanks for your question.

I wrote a sample that opens an IP camera via OpenCV + GStreamer.
The framerate is around 24 fps. Hope this helps.

Please also let us know the results.
Thanks

#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/videoio.hpp>

int main(void)
{
    cv::VideoCapture cap("uridecodebin uri=rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov ! videoconvert ! videoscale ! appsink");

    if( !cap.isOpened() )
    {
        std::cout << "Not good, open camera failed" << std::endl;
        return 0;
    }

    cv::Mat frame;
    while(true)
    {
        cap >> frame;
        cv::imshow("Frame", frame);
        cv::waitKey(1);
    }
    return 0;
}

Hello AastaLLL, thanks for your reply! I copied all of your code to test it. The result is that I can see the funny animated video, but the real-time performance is poor, with delay, stuttering and mosaic artifacts. The output window is copied as follows.

VIDEOIO ERROR: V4L: device uridecodebin uri=rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov ! videoconvert ! videoscale ! appsink: Unable to query number of channels
Inside NvxLiteH264DecoderLowLatencyInitNvxLiteH264DecoderLowLatencyInit set DPB and MjstreamingInside NvxLiteH265DecoderLowLatencyInitNvxLiteH265DecoderLowLatencyInit set DPB and MjstreamingNvMMLiteOpen : Block : BlockType = 261 
TVMR: NvMMLiteTVMRDecBlockOpen: 7580: NvMMLiteBlockOpen 
NvMMLiteBlockCreate : Block : BlockType = 261 
TVMR: cbBeginSequence: 1166: BeginSequence  240x160, bVPR = 0, fFrameRate = 24.000000
TVMR: LowCorner Frequency = 100000 
TVMR: cbBeginSequence: 1545: DecodeBuffers = 4, pnvsi->eCodec = 4, codec = 0 
TVMR: cbBeginSequence: 1606: Display Resolution : (240x160) 
TVMR: cbBeginSequence: 1607: Display Aspect Ratio : (240x160) 
TVMR: cbBeginSequence: 1649: ColorFormat : 5 
TVMR: cbBeginSequence:1660 ColorSpace = NvColorSpace_YCbCr709
TVMR: cbBeginSequence: 1790: SurfaceLayout = 3
TVMR: cbBeginSequence: 1868: NumOfSurfaces = 8, InteraceStream = 0, InterlaceEnabled = 0, bSecure = 0, MVC = 0 Semiplanar = 1, bReinit = 1, BitDepthForSurface = 8 LumaBitDepth = 8, ChromaBitDepth = 8, ChromaFormat = 5
Allocating new output: 240x160 (x 10), ThumbnailMode = 0
reference in DPB was never decoded
reference in DPB was never decoded
reference in DPB was never decoded
reference in DPB was never decoded
reference in DPB was never decoded
reference in DPB was never decoded
TVMR: FrameRate = 24 
TVMR: NVDEC LowCorner Freq = (80000 * 1024) 
init done
opengl support available
TVMR: FrameRate = 24.000038 
TVMR: PulldownTSDiff = 416666 
TVMR: FrameRate = 24.000038 
TVMR: FrameRate = 6.428801 
TVMR: FrameRate = 24.000038 
TVMR: FrameRate = 8.158270 
TVMR: FrameRate = 13.032152 
TVMR: FrameRate = 24.000038 
TVMR: FrameRate = 24.000038 
TVMR: FrameRate = 3.436721 
TVMR: FrameRate = 11.804062 
TVMR: FrameRate = 24.000038 
TVMR: FrameRate = 24.000038 
TVMR: FrameRate = 24.000038 
TVMR: FrameRate = 24.000038 
TVMR: FrameRate = 24.000038 
TVMR: FrameRate = 24.000038 
TVMR: FrameRate = 24.000038 
TVMR: FrameRate = 24.000038 
TVMR: FrameRate = 7.007301 
TVMR: FrameRate = 24.000038 
TVMR: FrameRate = 7.007301 
TVMR: FrameRate = 24.000038 
TVMR: FrameRate = 24.000038 
TVMR: FrameRate = 24.000038 
TVMR: FrameRate = 24.000038 
TVMR: FrameRate = 24.000038 
TVMR: FrameRate = 2.929759 
TVMR: FrameRate = 24.000038 
TVMR: FrameRate = 24.000038 
TVMR: FrameRate = 24.000038 
TVMR: FrameRate = 10.434786 
TVMR: FrameRate = 7.826257 
TVMR: FrameRate = 4.149808 
TVMR: FrameRate = 15.913026 
TVMR: FrameRate = 15.317853 
TVMR: FrameRate = 22.501440 
TVMR: FrameRate = 4.085107 
TVMR: FrameRate = 24.000038 
TVMR: FrameRate = 24.000038 
TVMR: FrameRate = 24.000038 
TVMR: FrameRate = 24.000038 
TVMR: FrameRate = 24.000038 
TVMR: FrameRate = 24.000038 
TVMR: FrameRate = 24.000038 
TVMR: FrameRate = 24.000038

Sorry to bother you again. I’ve tested my pipeline, which I changed from yours; it was

cv::VideoCapture cap("uridecodebin uri=rtsp://admin:admin12345@192.168.1.64:554/h264/ch33/main/av_stream latency=0 ! videoconvert ! videoscale ! appsink");

and the result was that I could see my IP camera video successfully, thank you again! However, I found that the video stream plays so slowly that after running for a while, the video was delayed by four minutes. This is unacceptable in my real-time program. Do you have any suggestions?

4 mins is quite long~ Do you also have this problem when using other RTSP clients?

There are some properties for uridecodebin. Worth trying?
https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-base-plugins/html/gst-plugins-base-plugins-uridecodebin.html
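For example, a pipeline string like the one below passes uridecodebin's documented buffering properties through cv::VideoCapture. This is only an untested sketch based on that documentation page, reusing the RTSP URL from earlier in this thread; whether these properties actually reduce the delay here is not verified.

#include <opencv2/opencv.hpp>

int main()
{
    // buffer-duration (in ns) and use-buffering are documented uridecodebin properties;
    // setting them to 0/false may reduce internal queuing for a live RTSP source.
    cv::VideoCapture cap(
        "uridecodebin uri=rtsp://admin:admin12345@192.168.1.64:554/h264/ch33/main/av_stream "
        "buffer-duration=0 use-buffering=false "
        "! videoconvert ! appsink");

    if (!cap.isOpened())
        return -1;

    cv::Mat frame;
    while (cap.read(frame))
    {
        cv::imshow("Frame", frame);
        cv::waitKey(1);
    }
    return 0;
}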

For performance, if not yet done, activate all CPUs and boost the clocks with:

sudo ~/jetson_clocks.sh

Hello Githuber! Thanks for your kindness! I’ve found a link: http://gstreamer-devel.966125.n4.nabble.com/Usage-of-rtspsrc-instead-of-uridecodebin-td4673701.html#a4673771. From ironman’s question there, I think it is the default latency (2000 ms) of “uridecodebin” that causes the delay in my real-time video. However, I can’t understand aborilov’s reply, and I have not found any useful information about it. More unfortunately, I cannot contact ironman or aborilov even though I have subscribed to that mailing list. Do you have any ideas? I look forward to your reply.

Hello Honey_Patouceul! Thank you very much. However, I have two problems. One is that I don’t understand how to activate all CPUs, and I am puzzled: it seems that GStreamer’s decoding uses hardware; does “hardware” mean the CPU? The second problem is that I tested

gst-launch-1.0 rtspsrc location="rtsp://admin:admin12345@192.168.1.64:554/h264/ch33/main/av_stream" latency=0 ! decodebin ! autovideosink

in Ubuntu’s shell and the result was fine: I could see my IP camera video with no delay. When I left out “latency=0” and tested

gst-launch-1.0 rtspsrc location="rtsp://admin:admin12345@192.168.1.64:554/h264/ch33/main/av_stream" ! decodebin ! autovideosink

the video was delayed. So I feel it may not be an issue with jetson_clocks.

I am so lucky that I have found this link: http://stackoverflow.com/questions/23570572/using-custom-camera-in-opencv-via-gstreamer, based on which I changed my code to

#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/videoio.hpp>

int main(void)
{
    cv::VideoCapture cap("rtspsrc location=rtsp://admin:admin12345@192.168.1.64:554/h264/ch33/main/av_stream latency=0 ! decodebin ! videoconvert ! appsink");

    if( !cap.isOpened() )
    {
        std::cout << "Not good, open camera failed" << std::endl;
        return 0;
    }

    cv::Mat frame;
    while(true)
    {
        cap >> frame;
        cv::imshow("Frame", frame);
        cv::waitKey(1);
    }
    return 0;
}

and it gives a real-time result without delay. PS: when I used 1920x1080 video, there was still delay and stuttering between video frames, so I changed the resolution to 1280x720, and it is real time now.
Now I need to solve the problem of how to process the Mat and present the processed OpenCV Mat in the Qt GUI. I have tested and found that this thread’s signal cannot reach the Qt GUI main thread. Oh! Headache!
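
One approach that may help is to run the capture loop in a QThread and emit each frame as a QImage through a queued signal, which Qt delivers on the GUI (main) thread. This is a minimal sketch only, assuming Qt 5.5 with C++11; the class name CaptureWorker and the slot wiring are made up for illustration.

#include <QImage>
#include <QThread>
#include <opencv2/opencv.hpp>

// Hypothetical worker (needs moc because of Q_OBJECT) that runs the capture loop
// off the GUI thread and hands frames to the main thread as QImages.
class CaptureWorker : public QThread
{
    Q_OBJECT
signals:
    void frameReady(const QImage &img); // cross-thread signal -> queued, runs in the GUI thread

protected:
    void run() override
    {
        cv::VideoCapture cap("rtspsrc location=rtsp://admin:admin12345@192.168.1.64:554/h264/ch33/main/av_stream latency=0 ! decodebin ! videoconvert ! appsink");
        cv::Mat frame, rgb;
        while (cap.read(frame))
        {
            cv::cvtColor(frame, rgb, CV_BGR2RGB); // OpenCV delivers BGR, QImage expects RGB
            QImage img(rgb.data, rgb.cols, rgb.rows,
                       static_cast<int>(rgb.step), QImage::Format_RGB888);
            emit frameReady(img.copy()); // deep copy: rgb is reused on the next iteration
        }
    }
};

// In the GUI class (assuming a QLabel* named label):
//   CaptureWorker *worker = new CaptureWorker;
//   connect(worker, &CaptureWorker::frameReady, this,
//           [this](const QImage &img) { label->setPixmap(QPixmap::fromImage(img)); });
//   worker->start();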

Hello, I have tried working with OpenCV, but I have a problem when working with a Hikvision IP camera. I can get the image sequence, but after a period of video transmission an exception occurs. The other problem I found is that when I process the images in real time, the capture becomes slow; could it be that OpenCV is limited for video processing? In addition, when storing the image sequence, an exception occurs when I use the function video.write(frame2); // we record the video

What would be the solution to this kind of problem? I attach the code; how should I use GStreamer to fix the video processing? I am working with OpenCV 3.4.1.

#include <opencv2/opencv.hpp>

#include <opencv2/calib3d.hpp>
#include <opencv2/videoio.hpp>
#include <opencv/highgui.h>
#include <opencv/cv.h>
#include <opencv2/core/utility.hpp>
#include <opencv2/imgproc.hpp>   // Gaussian Blur
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/core/core.hpp> // basic OpenCV structures (cv::Mat, Scalar)
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>
#include <math.h>
#include <tchar.h>
#include <windows.h>
#include <concurrent_queue.h>

using namespace std;
using namespace cv;

int main() {

cv::Mat frame1, edges;

//cv::VideoCapture vcap;
cv::Mat dst, cdst;
cv::Mat frame2, frame3;
/*cv::VideoCapture cap2("rtsp://admin:hik12345@192.168.1.9:554/h264/ch1/main/av_stream");

if (!cap2.isOpened()) {
std::cout << "Error opening video stream or file" << std::endl;
return -1;
}
else
{


std::cout << " IP camera 2 opened successfully!" << std::endl;
cv::Mat frame;
}*/

VideoCapture cap1("rtsp://admin:hik12345@192.168.1.16:554/h264/ch1/main/av_stream");

if (!cap1.isOpened()) {
	std::cout << "Error opening video stream or file" << std::endl;
	return -1;
}
else
{


	std::cout << " IP camera 1 opened successfully!" << std::endl;
	cv::Mat frame;
}

int frame_width = cap1.get(CV_CAP_PROP_FRAME_WIDTH);
int frame_height = cap1.get(CV_CAP_PROP_FRAME_HEIGHT);
VideoWriter video("data .avi", CV_FOURCC('M', 'J', 'P', 'G'), 30, Size(frame_width, frame_height)); // store the video on disk

CascadeClassifier detector; // Haar face classifier

if (!detector.load("C:\\opencv\\sources\\data\\haarcascades\\haarcascade_frontalface_alt.XML"))
	cout << "cannot open classifier" << endl;

for (;;) {

	cap1 >> frame1;
	/*if (!vcap.read(frame1)) {
	//std::cout << "No frame" << std::endl;
	cv::waitKey();
	}*/
	//video.write(frame2); // record the video
	//video.write(frame1); // record the video


	//Canny(frame1, edges, 0, 30, 3); // detect the edges of the scene


	cvtColor(frame1, edges, CV_BGRA2GRAY);
	//GaussianBlur(edges, frame2, Size(5, 5), 1.5, 1.5);
	// Canny(frame2, frame3, 0, 30, 3);



	//Canny(cdst, edges, 0, 30, 3);
	//GaussianBlur(cdst, edges, Size(5, 5), 1.5, 1.5); // apply a Gaussian filter to remove noise
	//Canny(frame1, frame2, 0, 30, 3); // detect the edges of the scene
	//cv::namedWindow("image2");
	cv::namedWindow("camera1");
	cv::resizeWindow("camera1", 520, 520); // set the window size
	cv::resizeWindow("image2", 520, 520);  // set the window size
	imshow("camera1", frame1);
	imshow("image2", edges);
	//imshow("image3", frame3);
	//imshow("image", edges);
	if (cv::waitKey(30) >= 0) break;
}

return 0;

}
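
A frequent cause of exceptions like the ones described above is a dropped RTSP frame arriving as an empty cv::Mat, which then crashes cvtColor or video.write. Below is a minimal sketch only, assuming OpenCV 3.4.1 was built with GStreamer support and reusing the Hikvision RTSP URL from the code above; the pipeline mirrors the one that worked earlier in this thread, and empty frames are skipped before processing or writing.

#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
    // cv::CAP_GSTREAMER tells OpenCV 3.x to treat the string as a GStreamer pipeline.
    cv::VideoCapture cap(
        "rtspsrc location=rtsp://admin:hik12345@192.168.1.16:554/h264/ch1/main/av_stream latency=0 "
        "! decodebin ! videoconvert ! appsink",
        cv::CAP_GSTREAMER);

    if (!cap.isOpened())
    {
        std::cout << "Error opening video stream" << std::endl;
        return -1;
    }

    int w = static_cast<int>(cap.get(cv::CAP_PROP_FRAME_WIDTH));
    int h = static_cast<int>(cap.get(cv::CAP_PROP_FRAME_HEIGHT));
    cv::VideoWriter video("data.avi", cv::VideoWriter::fourcc('M', 'J', 'P', 'G'), 30,
                          cv::Size(w, h));

    cv::Mat frame, gray;
    for (;;)
    {
        cap >> frame;
        if (frame.empty())   // dropped frame: skip it instead of crashing downstream
            continue;

        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        video.write(frame);  // frame size must match the size given to VideoWriter
        cv::imshow("camera1", frame);
        if (cv::waitKey(1) >= 0)
            break;
    }
    return 0;
}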