I am having a problem capturing an RTSP stream from my IP camera.
I am using the following code:
import sys
import cv2

gst = "rtspsrc location='rtsp://10.0.2.130:554/s1' name=r latency=0 ! rtph264depay ! h264parse ! omxh264dec ! appsink"
video_capture = cv2.VideoCapture(gst)
if not video_capture.isOpened():
    print("VideoCapture Failed!")
    sys.exit(1)
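In case it is relevant, here is a sketch of how the pipeline string could be assembled with an explicit BGR conversion appended before appsink. My understanding is that OpenCV's appsink wants plain video/x-raw BGR frames rather than the decoder's NVMM output; the make_rtsp_pipeline helper and the nvvidconv/videoconvert tail below are my own guesses, not something I have confirmed fixes the error:

```python
def make_rtsp_pipeline(url, latency=0):
    # Hypothetical helper: builds the GStreamer string passed to cv2.VideoCapture.
    # The nvvidconv/videoconvert tail is meant to convert the omxh264dec output
    # (NVMM memory) into the BGR frames OpenCV expects -- this tail is my
    # assumption, not verified against the error above.
    return (
        "rtspsrc location='{}' name=r latency={} ! "
        "rtph264depay ! h264parse ! omxh264dec ! "
        "nvvidconv ! video/x-raw, format=(string)BGRx ! "
        "videoconvert ! video/x-raw, format=(string)BGR ! appsink"
    ).format(url, latency)

gst = make_rtsp_pipeline("rtsp://10.0.2.130:554/s1")
# video_capture = cv2.VideoCapture(gst)  # same call as above
```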
and I get this error:
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline
) in cvCaptureFromCAM_GStreamer, file /home/nvidia/opencv/modules/videoio/src/cap_gstreamer.cpp, line 881
VIDEOIO(cvCreateCapture_GStreamer (CV_CAP_GSTREAMER_FILE, filename)): raised OpenCV exception:
/home/nvidia/opencv/modules/videoio/src/cap_gstreamer.cpp:881: error: (-2) GStreamer: unable to start pipeline
in function cvCaptureFromCAM_GStreamer
VideoCapture Failed!
I have been through almost all of the related posts on this forum and am still struggling with this error.
I have tried the same pipeline with gst-launch-1.0 and did not see any error:
$ gst-launch-1.0 rtspsrc location='rtsp://10.0.2.130:554/s1' name=r latency=0 ! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! appsink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsp://10.0.2.130:554/s1
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
Progress: (request) SETUP stream 0
Progress: (request) SETUP stream 1
Progress: (open) Opened Stream
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Progress: (request) Sending PLAY request
Progress: (request) Sending PLAY request
Progress: (request) Sent PLAY request
NvMMLiteOpen : Block : BlockType = 261
TVMR: NvMMLiteTVMRDecBlockOpen: 7907: NvMMLiteBlockOpen
NvMMLiteBlockCreate : Block : BlockType = 261
TVMR: cbBeginSequence: 1223: BeginSequence 640x368, bVPR = 0
TVMR: LowCorner Frequency = 100000
TVMR: cbBeginSequence: 1622: DecodeBuffers = 9, pnvsi->eCodec = 4, codec = 0
TVMR: cbBeginSequence: 1693: Display Resolution : (640x360)
TVMR: cbBeginSequence: 1694: Display Aspect Ratio : (636x360)
TVMR: cbBeginSequence: 1762: ColorFormat : 5
TVMR: cbBeginSequence:1767 ColorSpace = NvColorSpace_YCbCr709_ER
TVMR: cbBeginSequence: 1904: SurfaceLayout = 3
TVMR: cbBeginSequence: 2005: NumOfSurfaces = 16, InteraceStream = 0, InterlaceEnabled = 0, bSecure = 0, MVC = 0 Semiplanar = 1, bReinit = 1, BitDepthForSurface = 8 LumaBitDepth = 8, ChromaBitDepth = 8, ChromaFormat = 5
TVMR: cbBeginSequence: 2007: BeginSequence ColorPrimaries = 1, TransferCharacteristics = 1, MatrixCoefficients = 1
Allocating new output: 640x368 (x 16), ThumbnailMode = 0
OPENMAX: HandleNewStreamFormat: 3464: Send OMX_EventPortSettingsChanged : nFrameWidth = 640, nFrameHeight = 368
TVMR: FrameRate = 30
TVMR: NVDEC LowCorner Freq = (100000 * 1024)
If I change the appsink to nvoverlaysink, I can see the feed on my screen.
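For completeness, this is roughly the command that does display the feed (swapping the sink; I believe the videoconvert element is dropped here since nvoverlaysink consumes the decoder output directly, but that detail is from memory):

```shell
gst-launch-1.0 rtspsrc location='rtsp://10.0.2.130:554/s1' name=r latency=0 ! \
  rtph264depay ! h264parse ! omxh264dec ! nvoverlaysink
```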
Regarding my OpenCV installation, I am pretty sure it was built with all the needed flags, since I can run my code with the following pipeline using the local camera:
gst = "nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)640, height=(int)480, format=(string)I420, framerate=(fraction)24/1 ! nvvidconv flip-method=0 ! video/x-raw, format=(string)I420 ! videoconvert ! video/x-raw, format=(string)BGR ! appsink"
I would appreciate any help.