OpenCV problem with capturing RTSP stream using GStreamer on TX2

I am having a problem capturing the RTSP stream coming from my IP camera.

I am using the following code:

gst = "rtspsrc location='rtsp://10.0.2.130:554/s1' name=r latency=0 ! rtph264depay ! h264parse ! omxh264dec ! appsink"
video_capture = cv2.VideoCapture(gst)
if not video_capture.isOpened():
    print("VideoCapture Failed!")
    sys.exit(1)

and I got this error:

OpenCV Error: Unspecified error (GStreamer: unable to start pipeline
) in cvCaptureFromCAM_GStreamer, file /home/nvidia/opencv/modules/videoio/src/cap_gstreamer.cpp, line 881
VIDEOIO(cvCreateCapture_GStreamer (CV_CAP_GSTREAMER_FILE, filename)): raised OpenCV exception:

/home/nvidia/opencv/modules/videoio/src/cap_gstreamer.cpp:881: error: (-2) GStreamer: unable to start pipeline
 in function cvCaptureFromCAM_GStreamer

VideoCapture Failed!

I have been through almost all the related posts on this forum and am still struggling with this error.

I have tried the same pipeline with gst-launch-1.0 and have not seen any error:

$ gst-launch-1.0 rtspsrc location='rtsp://10.0.2.130:554/s1' name=r latency=0 ! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! appsink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsp://10.0.2.130:554/s1
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
Progress: (request) SETUP stream 0
Progress: (request) SETUP stream 1
Progress: (open) Opened Stream
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Progress: (request) Sending PLAY request
Progress: (request) Sending PLAY request
Progress: (request) Sent PLAY request
NvMMLiteOpen : Block : BlockType = 261 
TVMR: NvMMLiteTVMRDecBlockOpen: 7907: NvMMLiteBlockOpen 
NvMMLiteBlockCreate : Block : BlockType = 261 
TVMR: cbBeginSequence: 1223: BeginSequence  640x368, bVPR = 0
TVMR: LowCorner Frequency = 100000 
TVMR: cbBeginSequence: 1622: DecodeBuffers = 9, pnvsi->eCodec = 4, codec = 0 
TVMR: cbBeginSequence: 1693: Display Resolution : (640x360) 
TVMR: cbBeginSequence: 1694: Display Aspect Ratio : (636x360) 
TVMR: cbBeginSequence: 1762: ColorFormat : 5 
TVMR: cbBeginSequence:1767 ColorSpace = NvColorSpace_YCbCr709_ER
TVMR: cbBeginSequence: 1904: SurfaceLayout = 3
TVMR: cbBeginSequence: 2005: NumOfSurfaces = 16, InteraceStream = 0, InterlaceEnabled = 0, bSecure = 0, MVC = 0 Semiplanar = 1, bReinit = 1, BitDepthForSurface = 8 LumaBitDepth = 8, ChromaBitDepth = 8, ChromaFormat = 5
TVMR: cbBeginSequence: 2007: BeginSequence  ColorPrimaries = 1, TransferCharacteristics = 1, MatrixCoefficients = 1
Allocating new output: 640x368 (x 16), ThumbnailMode = 0
OPENMAX: HandleNewStreamFormat: 3464: Send OMX_EventPortSettingsChanged : nFrameWidth = 640, nFrameHeight = 368 
TVMR: FrameRate = 30 
TVMR: NVDEC LowCorner Freq = (100000 * 1024)

If I change the appsink to nvoverlaysink, I can see the feed on my screen.

Regarding my OpenCV installation, I am fairly sure it was built with all the needed flags, as I can run my code with the following pipeline using the local camera:

gst = "nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)640, height=(int)480, format=(string)I420, framerate=(fraction)24/1 ! nvvidconv flip-method=0 ! video/x-raw, format=(string)I420 ! videoconvert ! video/x-raw, format=(string)BGR ! appsink"

I appreciate any help.

It seems the pipeline in your code lacks a videoconvert between omxh264dec and appsink.

I have tried it with videoconvert and it is still the same problem:

>>> import sys
>>> import cv2
>>> gst = "rtspsrc location='rtsp://10.0.2.130:554/s1' name=r latency=0 ! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! appsink"
>>> video_capture = cv2.VideoCapture(gst)
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline
) in cvCaptureFromCAM_GStreamer, file /home/nvidia/opencv/modules/videoio/src/cap_gstreamer.cpp, line 881
VIDEOIO(cvCreateCapture_GStreamer (CV_CAP_GSTREAMER_FILE, filename)): raised OpenCV exception:

/home/nvidia/opencv/modules/videoio/src/cap_gstreamer.cpp:881: error: (-2) GStreamer: unable to start pipeline
 in function cvCaptureFromCAM_GStreamer

>>> if video_capture.isOpened() == False:
...     print("VideoCapture Failed!")
...     sys.exit(1)
... 
VideoCapture Failed!

You may remove the quotes in the Python (or C++) GStreamer string for OpenCV. Those are only needed for the shell with gst-launch.

gst = "rtspsrc location=rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov latency=0 ! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! appsink"

seems to successfully launch. (I’m using R28.2-DP2 and opencv-3.4.1).
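For anyone hitting the same error, the difference is easy to miss; a tiny sketch (the pipeline strings are abbreviated from this thread, and the helper name is made up for illustration):

```python
# In a shell, quotes keep the URL as one token for gst-launch; OpenCV hands
# the string directly to GStreamer's pipeline parser, where the quotes end
# up breaking the pipeline (as seen in this thread).
shell_style = "rtspsrc location='rtsp://10.0.2.130:554/s1' latency=0 ! appsink"
opencv_style = "rtspsrc location=rtsp://10.0.2.130:554/s1 latency=0 ! appsink"

def has_shell_quotes(pipeline):
    """Return True if the pipeline string still carries shell-style quotes."""
    return "'" in pipeline or '"' in pipeline

print(has_shell_quotes(shell_style))   # True  -> strip them first
print(has_shell_quotes(opencv_style))  # False -> safe for cv2.VideoCapture
```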

rtspTest.py.txt (460 Bytes)

Thanks. Removing the quotes did the trick.

Appreciated.

For what it's worth, I managed to resolve a similar issue: starting an RTSP server from the CSI ov5693 camera locally and receiving the video stream from it within the local network: https://devtalk.nvidia.com/default/topic/1032831/jetson-tx2/attempt-to-figure-out-csi-mipi-devboard-rtsp/

The command below starts a video streaming server:

./test-launch "( nvcamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1, format=I420 ! nvvidconv flip-method=4 ! video/x-raw, width=720, height=480, framerate=30/1, format=I420 ! timeoverlay ! omxh265enc ! rtph265pay name=pay0 pt=96 )"

The command below pops up the video streaming window:

gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test  ! 'application/x-rtp, media=(string)video' ! decodebin ! videoconvert ! ximagesink

The above plays the video from the CSI MIPI camera to the screen of the Jetson. Moreover, if I adjust the firewall rules as stated below, it will be accessible network-wide:

sudo iptables -A INPUT -p tcp --dport 8554 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
sudo iptables -A OUTPUT -p tcp --sport 8554 -m conntrack --ctstate ESTABLISHED -j ACCEPT

Hi Honey_Patouceul,
I am trying to launch the server below and then play the stream from Python:

./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1 ! nvvidconv ! video/x-raw, width=640, height=480, format=NV12, framerate=30/1 ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1"
import sys
import cv2

print(cv2.__version__)

gst = "rtspsrc location=rtsp://127.0.0.1:8554/test latency=0 ! rtph265depay ! h265parse ! omxh265dec ! videoconvert ! appsink"

cap = cv2.VideoCapture(gst)
if not cap.isOpened():
    print("capture failed")
    exit()

ret, frame = cap.read()
while ret:
    cv2.imshow('frame', frame)
    ret, frame = cap.read()
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Does this seem like code that should work?
Thanks.

Hi Andrey,

This looks OK to me. The server string works on Xavier R31.1, but my TX2 has R28.2.0 and the server string doesn't work with nvarguscamerasrc; it works with nvcamerasrc instead. You may try both, depending on your L4T release.
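To make that source choice explicit, a small hypothetical helper (the element names are real; the function is just a sketch, and the available set would in practice be built by running `gst-inspect-1.0 <name>` for each candidate):

```python
def pick_camera_source(available_elements):
    """Pick the NVIDIA camera source element for this L4T release.

    Newer releases (e.g. R31.x on Xavier) ship nvarguscamerasrc, while
    older ones such as R28.2 on TX2 provide nvcamerasrc. `available_elements`
    is the set of GStreamer elements installed on the system, e.g. probed
    with `gst-inspect-1.0 <name>`.
    """
    for element in ("nvarguscamerasrc", "nvcamerasrc"):
        if element in available_elements:
            return element
    raise RuntimeError("no NVIDIA camera source element found")

print(pick_camera_source({"nvcamerasrc", "omxh265enc"}))  # nvcamerasrc
```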

I have tried rtspTest.py.txt

3.4.3

(python:5991): GStreamer-CRITICAL **: 17:12:15.729: gst_element_get_state: assertion 'GST_IS_ELEMENT (element)' failed
VIDEOIO ERROR: V4L: device rtspsrc location=rtsp://admin:123456@192.168.10.57/stream0 latency=0 ! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! appsink: Unable to query number of channels
capture failed

What are you executing?
The TX2 doesn't use nvarguscamerasrc; it uses nvcamerasrc instead.

I am trying the code below on a Nano:

import sys
import cv2

print(cv2.__version__)

gst = "rtspsrc location=rtsp://admin:123456@192.168.10.57/stream0 latency=0 ! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! appsink"

cap = cv2.VideoCapture(gst)
if not cap.isOpened():
    print("capture failed")
    exit()

ret, frame = cap.read()
while ret:
    cv2.imshow('frame', frame)
    ret, frame = cap.read()
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Try these:

sudo apt-get install libgstrtspserver-1.0-dev libgstreamer1.0-dev
wget https://gstreamer.freedesktop.org/src/gst-rtsp/gst-rtsp-server-1.14.1.tar.xz
tar -xvf gst-rtsp-server-1.14.1.tar.xz
cd  gst-rtsp-server-1.14.1
cd examples
gcc test-launch.c -o test-launch $(pkg-config --cflags --libs gstreamer-1.0 gstreamer-rtsp-server-1.0)
./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, width=3280, height=2464, framerate=30/1 ! nvvidconv ! video/x-raw, width=640, height=480, format=NV12, framerate=30/1 ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1"
import sys
import cv2

print(cv2.__version__)

gst = "rtspsrc location=rtsp://127.0.0.1:8554/test latency=0 ! rtph265depay ! h265parse ! omxh265dec ! videoconvert ! appsink"

cap = cv2.VideoCapture(gst)
if not cap.isOpened():
    print("capture failed")
    exit()

ret, frame = cap.read()
while ret:
    cv2.imshow('frame', frame)
    ret, frame = cap.read()
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

H.264 was used for the RPi; the Nano supports H.265.
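The codec choice only changes three elements of the receive pipeline, which can be sketched with a hypothetical helper (matching the pipeline strings used earlier in this thread):

```python
def make_receive_pipeline(url, codec="h265", latency=0):
    """Build the OpenCV/GStreamer receive string for H.264 or H.265.

    The depay, parse and decode elements must all match the codec the
    sender pays with (rtph264pay vs rtph265pay).
    """
    if codec not in ("h264", "h265"):
        raise ValueError("codec must be 'h264' or 'h265'")
    return ("rtspsrc location={u} latency={l} ! rtp{c}depay ! {c}parse "
            "! omx{c}dec ! videoconvert ! appsink"
            ).format(u=url, l=latency, c=codec)

print(make_receive_pipeline("rtsp://127.0.0.1:8554/test"))
print(make_receive_pipeline("rtsp://127.0.0.1:8554/test", codec="h264"))
```

The resulting string goes straight into `cv2.VideoCapture(...)` as in the scripts above.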

Thanks a lot, issue resolved

How can I read a video file?

gst-launch-1.0 filesrc location=~/installers_packages/sampleh264.mp4 ! queue max-size-bytes=42000 max-size-buffers=0 max-size-time=0 ! qtdemux ! h264parse ! omxh264dec ! nvoverlaysink
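To get the same file into OpenCV rather than onto the overlay, the appsink variant can be sketched like this (the helper and file path are illustrative, and it assumes OpenCV was built with GStreamer support):

```python
def make_file_pipeline(path):
    """Decode an H.264 .mp4/.mov file into BGR frames for cv2.VideoCapture.

    nvoverlaysink renders straight to the display, bypassing OpenCV; for
    processing in Python the pipeline must end in appsink, with
    videoconvert producing the BGR format OpenCV expects.
    """
    return ("filesrc location={} ! qtdemux ! h264parse ! omxh264dec "
            "! videoconvert ! video/x-raw, format=BGR ! appsink").format(path)

gst = make_file_pipeline("/home/nvidia/sampleh264.mp4")  # illustrative path
print(gst)
# cap = cv2.VideoCapture(gst)  # then read frames as in the scripts above
```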