Opening an IP camera with GStreamer and OpenCV only displays a still picture. How to solve it?

Hello everyone! I have encountered a confusing problem on my Jetson TX2.
First, I tested the following command in a terminal and got real-time video from my IP camera:

gst-launch-1.0 rtspsrc location="rtsp://192.168.0.10:554/user=admin_password=tlJwpbo6_channel=1_stream=0.sdp?real_stream" latency=0 ! decodebin ! nvvidconv ! videoconvert ! xvimagesink sync=false

However, when I use GStreamer in my project to display the stream with OpenCV, the following code always shows a still picture. Could someone help me? Best wishes to you all!

#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/videoio.hpp>
int main(void)
{
    //cv::VideoCapture cap("rtspsrc location=rtsp://192.168.0.10:554/user=admin_password=tlJwpbo6_channel=1_stream=0.sdp?real_stream latency=0 ! decodebin ! videoconvert ! appsink");
    //cv::VideoCapture cap("rtspsrc location=rtsp://192.168.0.10:554/user=admin_password=tlJwpbo6_channel=1_stream=0.sdp?real_stream latency=0 ! decodebin ! videoconvert ! xvimagesink sync=false ! appsink");
    cv::VideoCapture cap("rtspsrc location=rtsp://192.168.0.10:554/user=admin_password=tlJwpbo6_channel=1_stream=0.sdp?real_stream ! decodebin ! nvvidconv ! videoconvert ! ximagesink sync=false ! appsink");
    if( !cap.isOpened() )
    {
        std::cout << "Not good, open camera failed" << std::endl;
        return 0;
    }
    std::cout << " Open IP camera successfully!" << std::endl;
    cv::Mat frame;
    while(true)
    {
        cap >> frame;
        cv::imshow("Frame", frame);
        cv::waitKey(1);
    }
    return 0;
}

Your code is waiting for key input before it shows the next picture.
Are you pressing a key? Do you get the next picture after pressing the key?

Hi snarky,
What puzzles me is which key I should press. I don't know why my code would be waiting, and I also tried pressing a key, but the result did not change. Could you give me more suggestions?

Your system is NOT waiting for a keypress. waitKey(1) specifies that the system will wait for about 1 ms before moving on. Beyond that I am afraid I am not much help.
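To illustrate (this snippet is not from the original post), the return value of waitKey makes the behaviour visible: it returns the code of a key pressed within the timeout, or -1 if the timeout expires with no key pressed, and the loop continues either way.

int key = cv::waitKey(1);   // blocks for at most ~1 ms waiting for a key event
if (key == -1)
{
    // no key was pressed within the timeout; execution simply moves on
}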

Doug

Hi AeroClassics,
Thank you for your suggestions!

A 1 ms timeout is short (you are tuned for 1000 fps)… You may adjust it to your frame rate, and test whether a frame has really been read.

The main problem seems to be the end of your pipeline, which has two sinks:

... videoconvert ! ximagesink sync=false ! appsink

ximagesink has no output, so your app receives nothing.
Could you try instead:

... videoconvert ! video/x-raw, format=(string)BGR ! appsink
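For what it's worth, here is a minimal sketch of how that change could look when applied to the original program: a single appsink fed BGR frames through the caps filter, a check that a frame was really read, and a waitKey closer to the stream's frame rate. The pipeline string reuses the URL from the original post; whether nvvidconv is needed depends on the setup, so treat this as a sketch rather than a verified pipeline.

#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/videoio.hpp>

int main(void)
{
    // Single sink at the end of the pipeline: the caps filter forces BGR,
    // and appsink hands the frames to OpenCV.
    cv::VideoCapture cap("rtspsrc location=rtsp://192.168.0.10:554/user=admin_password=tlJwpbo6_channel=1_stream=0.sdp?real_stream latency=0 ! decodebin ! nvvidconv ! videoconvert ! video/x-raw, format=(string)BGR ! appsink");
    if (!cap.isOpened())
    {
        std::cout << "Failed to open IP camera" << std::endl;
        return -1;
    }
    cv::Mat frame;
    while (true)
    {
        cap >> frame;
        if (frame.empty())          // test whether a frame has really been read
            break;
        cv::imshow("Frame", frame);
        if (cv::waitKey(30) == 27)  // ~30 ms suits a 25-30 fps stream; Esc quits
            break;
    }
    return 0;
}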

I was having a lot of issues with GStreamer and OpenCV - you might find this thread useful (although I am using Python).

The pipeline and code there work to get the camera functioning on my TX2.

https://devtalk.nvidia.com/default/topic/987537/jetson-tx1/videocapture-fails-to-open-onboard-camera-l4t-24-2-1-opencv-3-1/post/5145199/#5145199

I would make sure you compiled OpenCV with the GStreamer support flag on (if you have not done this already!).

If OpenCV has no GStreamer support, I would expect the VideoCapture open to fail.
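One quick way to check this (a small illustrative snippet, not from the original post): print OpenCV's build information and look for "GStreamer" under the Video I/O section.

#include <iostream>
#include <opencv2/core.hpp>

int main(void)
{
    // The "Video I/O" section of this report shows whether GStreamer support
    // was compiled in (e.g. "GStreamer: ... YES").
    std::cout << cv::getBuildInformation() << std::endl;
    return 0;
}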

On TX2 (I'm running L4T 27.0.1), the only way for me to get it working from the onboard camera into an opencv-3.2.0/C++ application was with planar (I420) format, as suggested in https://devtalk.nvidia.com/default/topic/1001696/jetson-tx1/failed-to-open-tx1-on-board-camera/post/5117370/#5117370.
That example, although not optimal, shows how to get frames into OpenCV. It might be useful for checking correct frame reading and autosize window allocation. Be aware that imshow needs some time for drawing: if you don't provide a preallocated window, it has to create one for each frame, and this may take some time.
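As an illustration of the window point (not from the original post), the window can be created once before the loop so imshow only has to draw into it:

cv::namedWindow("Frame", cv::WINDOW_AUTOSIZE);  // allocate the window once

cv::Mat frame;
while (cap.read(frame))     // read() returns false if no frame was delivered
{
    cv::imshow("Frame", frame);
    cv::waitKey(30);
}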

If not yet done, I'd also suggest boosting the clocks with

sudo ~/jetson_clocks.sh

Hi Honey_Patouceul, sorry to bother you again.
I have changed my code according to what you suggested.

const char* gstIPCam = "rtspsrc location=rtsp://192.168.0.10:554/user=admin_password=tlJwpbo6_channel=1_stream=0.sdp?real_stream ! "
                       "decodebin ! "
                       "nvvidconv ! "
                       "videoconvert ! "
                       "video/x-raw,format=(string)BGR ! "
                       "appsink";

cv::VideoCapture cap(gstIPCam);

But there is still an error.

./ShowIPCameraFrame
Inside NvxLiteH264DecoderLowLatencyInit
NvxLiteH264DecoderLowLatencyInit set DPB and Mjstreaming
Inside NvxLiteH265DecoderLowLatencyInit
NvxLiteH265DecoderLowLatencyInit set DPB and Mjstreaming
NvMMLiteOpen : Block : BlockType = 261 
TVMR: NvMMLiteTVMRDecBlockOpen: 7818: NvMMLiteBlockOpen 
NvMMLiteBlockCreate : Block : BlockType = 261 
TVMR: cbBeginSequence: 1190: BeginSequence  1280x720, bVPR = 0
TVMR: LowCorner Frequency = 100000 
TVMR: cbBeginSequence: 1583: DecodeBuffers = 10, pnvsi->eCodec = 4, codec = 0 
TVMR: cbBeginSequence: 1654: Display Resolution : (1280x720) 
TVMR: cbBeginSequence: 1655: Display Aspect Ratio : (1280x720) 
TVMR: cbBeginSequence: 1697: ColorFormat : 5 
TVMR: cbBeginSequence:1711 ColorSpace = NvColorSpace_YCbCr601
TVMR: cbBeginSequence: 1839: SurfaceLayout = 3
TVMR: cbBeginSequence: 1936: NumOfSurfaces = 14, InteraceStream = 0, InterlaceEnabled = 0, bSecure = 0, MVC = 0 Semiplanar = 1, bReinit = 1, BitDepthForSurface = 8 LumaBitDepth = 8, ChromaBitDepth = 8, ChromaFormat = 5
TVMR: cbBeginSequence: 1938: BeginSequence  ColorPrimaries = 0, TransferCharacteristics = 0, MatrixCoefficients = 0
Allocating new output: 1280x720 (x 14), ThumbnailMode = 0
TVMR: FrameRate = 38 
TVMR: NVDEC LowCorner Freq = (126666 * 1024) 

(ShowIPCameraFrame:3379): GStreamer-CRITICAL **: gst_element_get_static_pad: assertion 'GST_IS_ELEMENT (element)' failed

(ShowIPCameraFrame:3379): GStreamer-CRITICAL **: gst_pad_get_current_caps: assertion 'GST_IS_PAD (pad)' failed

(ShowIPCameraFrame:3379): GStreamer-CRITICAL **: gst_caps_get_structure: assertion 'GST_IS_CAPS (caps)' failed

(ShowIPCameraFrame:3379): GStreamer-CRITICAL **: gst_structure_get_int: assertion 'structure != NULL' failed

(ShowIPCameraFrame:3379): GStreamer-CRITICAL **: gst_structure_get_int: assertion 'structure != NULL' failed

(ShowIPCameraFrame:3379): GStreamer-CRITICAL **: gst_structure_get_fraction: assertion 'structure != NULL' failed
TVMR: cbDisplayPicture: 3889: Retunred NULL Frame Buffer 
TVMR: TVMRFrameStatusReporting: 6266: Closing TVMR Frame Status Thread -------------
TVMR: TVMRVPRFloorSizeSettingThread: 6084: Closing TVMRVPRFloorSizeSettingThread -------------
TVMR: TVMRFrameDelivery: 6116: Closing TVMR Frame Delivery Thread -------------
TVMR: NvMMLiteTVMRDecBlockClose: 8018: Done 
Frame size : 0 x 0, 0 Pixels 
GStreamer Plugin: Embedded video playback halted; module udpsrc4 reported: Internal data flow error.
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline
) in icvStartPipeline, file /home/nvidia/opencv-3.1.0/modules/videoio/src/cap_gstreamer.cpp, line 393
terminate called after throwing an instance of 'cv::Exception'
  what():  /home/nvidia/opencv-3.1.0/modules/videoio/src/cap_gstreamer.cpp:393: error: (-2) GStreamer: unable to start pipeline
 in function icvStartPipeline

Aborted (core dumped)

How to solve it?

I have this pipeline working for VideoCapture in opencv-3.2.0:

const char *gst =   "rtspsrc location=rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov ! application/x-rtp, media=(string)video \
		   ! decodebin    ! video/x-raw, format=(string)NV12 \
		   ! videoconvert ! video/x-raw, format=(string)BGR \
		   ! appsink";

Does it work for you?

Hi Honey_Patouceul, thank you very much!
I have solved it! I changed my code as follows.

gchar *descr = g_strdup(
            "rtspsrc location=rtsp://192.168.0.10:554/user=admin_password=tlJwpbo6_channel=1_stream=0.sdp?real_stream protocols=tcp latency=0 ! "
				"decodebin !"
				"videoconvert !"
				"appsink name=sink caps=video/x-raw,format=BGR sync=false"
        );
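For completeness, here is a minimal sketch of how such a description string is typically consumed when the appsink is given a name. The post above does not show the rest of the code, so the parsing/pulling part below is an assumption based on the usual gst_parse_launch()/appsink pattern; variable names are illustrative.

#include <gst/gst.h>
#include <gst/app/gstappsink.h>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    GError *error = nullptr;
    gchar *descr = g_strdup(
            "rtspsrc location=rtsp://192.168.0.10:554/user=admin_password=tlJwpbo6_channel=1_stream=0.sdp?real_stream protocols=tcp latency=0 ! "
            "decodebin !"
            "videoconvert !"
            "appsink name=sink caps=video/x-raw,format=BGR sync=false");
    GstElement *pipeline = gst_parse_launch(descr, &error);
    g_free(descr);
    if (!pipeline)
    {
        g_printerr("Could not build pipeline: %s\n", error->message);
        return -1;
    }

    // Fetch the appsink by the name given in the description string.
    GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    while (true)
    {
        // Blocks until a new BGR frame is available; returns NULL on EOS.
        GstSample *sample = gst_app_sink_pull_sample(GST_APP_SINK(sink));
        if (!sample)
            break;

        GstCaps *caps = gst_sample_get_caps(sample);
        GstStructure *s = gst_caps_get_structure(caps, 0);
        int width = 0, height = 0;
        gst_structure_get_int(s, "width", &width);
        gst_structure_get_int(s, "height", &height);

        GstBuffer *buffer = gst_sample_get_buffer(sample);
        GstMapInfo map;
        if (gst_buffer_map(buffer, &map, GST_MAP_READ))
        {
            // Wrap the mapped data in a Mat header (no copy) and display it.
            cv::Mat frame(height, width, CV_8UC3, (void *)map.data);
            cv::imshow("Frame", frame);
            cv::waitKey(1);
            gst_buffer_unmap(buffer, &map);
        }
        gst_sample_unref(sample);
    }

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(sink);
    gst_object_unref(pipeline);
    return 0;
}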