Reading mp4 file via gstreamer in opencv

Hi,

How can I read an mp4 file via a gstreamer pipeline in OpenCV? My current code and output are shown below:

Code

from imutils.video import FPS
import imutils
import time
import cv2

# Read mp4 via gstreamer pipeline
cap = cv2.VideoCapture('gst-launch-1.0 filesrc location=Calibration_footage.mp4 ! qtdemux ! queue ! h264parse ! omxh264dec ! nvoverlaysink', cv2.CAP_GSTREAMER)
fps = FPS().start()

while cap.isOpened():
    ret_val, img = cap.read()
    cv2.imshow('CSI Camera',img)
    fps.update()

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

fps.stop()
print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

cap.release()
cv2.destroyAllWindows()

Output

(python:10950): GStreamer-CRITICAL **: 21:15:53.390: gst_element_get_state: assertion 'GST_IS_ELEMENT (element)' failed

Your pipeline is wrong for opencv. It should end with appsink.
Furthermore, gst-launch is a program for launching a pipeline, not part of the pipeline itself.

Try this:

cap = cv2.VideoCapture('filesrc location=Calibration_footage.mp4 ! qtdemux ! queue ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw,format=BGRx ! queue ! videoconvert ! queue ! video/x-raw, format=BGR ! appsink', cv2.CAP_GSTREAMER)

This assumes your .mp4 video is encoded in h264.
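
If it helps, here is a minimal read loop around that string (just a sketch; the filename is the one from your post, and ret is checked so the loop stops cleanly at end of file):

import cv2

pipeline = ('filesrc location=Calibration_footage.mp4 ! qtdemux ! queue ! '
            'h264parse ! omxh264dec ! nvvidconv ! video/x-raw,format=BGRx ! '
            'queue ! videoconvert ! queue ! video/x-raw,format=BGR ! appsink')

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError('Failed to open GStreamer pipeline')

while True:
    ret, frame = cap.read()
    if not ret:  # end of file or decode error
        break
    cv2.imshow('mp4 via gstreamer', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()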


Thanks mate, this worked for reading the file. Is there any way to increase the FPS through this gstreamer pipeline, i.e. make it read the file faster?

You would first boost your Jetson if not yet done (nvpmodel -m0 and then jetson_clocks).

Furthermore, consider that the bottleneck is probably videoconvert. It should be able to run at 30 fps with a lower resolution (try 640x480 at first):

cap = cv2.VideoCapture('filesrc location=Calibration_footage.mp4 ! qtdemux ! queue ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw,format=BGRx,width=640,height=480 ! queue ! videoconvert ! queue ! video/x-raw, format=BGR ! appsink', cv2.CAP_GSTREAMER)

Recent opencv versions (I’d say from 3.1 or 3.2) are also able to accept formats other than BGR. You may try I420 or NV12 so that you don’t need videoconvert at all. This can be efficient if you process your frames from the luminance plane, for example. But if you need to convert to RGB in opencv for processing, it may end up just a bit slower than videoconvert.
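
As a rough sketch of that idea (assuming an OpenCV build that delivers NV12 buffers as a single-channel height*3/2 matrix; this depends on your OpenCV version):

import cv2

pipeline_nv12 = ('filesrc location=Calibration_footage.mp4 ! qtdemux ! queue ! '
                 'h264parse ! omxh264dec ! nvvidconv ! '
                 'video/x-raw,format=NV12,width=640,height=480 ! appsink')

cap = cv2.VideoCapture(pipeline_nv12, cv2.CAP_GSTREAMER)
ret, yuv = cap.read()  # expected shape: (480*3/2, 640), single channel
if ret:
    gray = yuv[:480, :]  # the Y plane already is the luminance image
    bgr = cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR_NV12)  # only if you really need BGR
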
Also note for filesrc that an external disk might be slow depending on how it is connected (a USB3 disk connected through a USB2 hub to a USB3-capable host will only run at USB2 speed).

Hey guys, this was working fine on JetPack 4.2.1 but on 4.2.2 it’s not working anymore.

I get this error:

OpenCV Error: Unspecified error (GStreamer: unable to start pipeline
) in cvCaptureFromCAM_GStreamer, file /home/nvidia/build_opencv/opencv/modules/videoio/src/cap_gstreamer.cpp, line 887
VIDEOIO(cvCreateCapture_GStreamer (CV_CAP_GSTREAMER_FILE, filename)): raised OpenCV exception:

/home/nvidia/build_opencv/opencv/modules/videoio/src/cap_gstreamer.cpp:887: error: (-2) GStreamer: unable to start pipeline
 in function cvCaptureFromCAM_GStreamer

[INFO] elapsed time: 0.00
[INFO] approx. FPS: 0.00
Segmentation fault (core dumped)

My code is pretty straightforward, running opencv 3.3.1 and python 3.6.8:

from imutils.video import FPS
import imutils
import time
import cv2

# Read mp4 via gstreamer pipeline
cap = cv2.VideoCapture('filesrc location=Calibration_footage.mp4 ! qtdemux ! queue ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw,format=BGRx ! queue ! videoconvert ! queue ! video/x-raw, format=BGR ! appsink', cv2.CAP_GSTREAMER)
fps = FPS().start()

while cap.isOpened():
	ret_val, img = cap.read()
	gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
	cv2.imshow('CSI Camera',img)
	cv2.imshow('Gray', gray)
	fps.update()

	if cv2.waitKey(1) & 0xFF == ord('q'):
		break

fps.stop()
print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

cap.release()

cv2.destroyAllWindows()

Is there meant to be a different pipeline for JetPack 4.2.2?

Check that the opencv version installed for python3 has gstreamer support. For example:

echo 'import cv2; print(cv2.getBuildInformation())' | python3 | grep -A5 GStreamer
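
Or, from inside Python itself, a quick sketch that just prints the relevant chunk of the build information:

import cv2

info = cv2.getBuildInformation()
start = info.find('GStreamer')
print(info[start:start + 300] if start != -1 else 'GStreamer not listed in build info')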

Hi !

I’m working with the dustynv jetson-inference repo and I would like to read an mp4 H264 video file through the gst pipeline. For the moment, I’m able to display the video file with this command:

gst-launch-1.0 filesrc location=file.mp4 ! qtdemux name=demux demux.video_0 ! queue ! h264parse ! omxh264dec ! nveglglessink -e

But what I would like is either to find a way to read this file through the gstCamera stuff, or to read it with openCV_VideoCapture and then use cudaFromNumpy to run detectNet on the frames and display them with the renderOnce method, which takes a cuda image as input. For the moment, the opencv to cuda process is working but I’m not able to display the image processed by detectNet.

Hope I’ve been clear, have a good day! :)

You may try this patch to jetson-utils.

Thanks for the quick reply, will give it a try asap.

Working, thx :)

How can I enable gstreamer support?

For enabling gstreamer support in opencv, you may have to rebuild it from source.
In the cmake configure step, you would have to set -D WITH_GSTREAMER=ON.
I’d suggest starting with this script.

ok. Thank you very much.

import cv2
filepath = "/home/nvidia/guoxiaolu/testvideo/v5_10.39.241.47.avi"

cap = cv2.VideoCapture('filesrc location={} ! qtdemux ! queue ! mpeg4videoparse ! omxmpeg4videodec ! nvvidconv ! video/x-raw,format=BGRx ! queue ! videoconvert ! queue ! video/x-raw, format=BGR ! appsink'.format(filepath), cv2.CAP_GSTREAMER)
print(cap.isOpened())
while True:
    ret, frame = cap.read()
    if not ret:
        break

and it returns:

OpenCV(3.4.1-dev) Error: Unspecified error (GStreamer: unable to start pipeline
) in cvCaptureFromCAM_GStreamer, file /home/nvidia/opencv/modules/videoio/src/cap_gstreamer.cpp, line 890
VIDEOIO(cvCreateCapture_GStreamer (CV_CAP_GSTREAMER_FILE, filename)): raised OpenCV exception:
OpenCV(3.4.1-dev) /home/nvidia/opencv/modules/videoio/src/cap_gstreamer.cpp:890: error: (-2) GStreamer: unable to start pipeline
 in function cvCaptureFromCAM_GStreamer

If I use command:
gst-launch-1.0 filesrc location=/home/nvidia/guoxiaolu/testvideo/v5_10.39.241.47.avi ! qtdemux ! queue ! mpeg4videoparse ! omxmpeg4videodec ! nvvidconv ! video/x-raw,format=BGRx ! queue ! videoconvert ! queue ! video/x-raw, format=BGR ! appsink

it returns:

Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ERROR: from element /GstPipeline:pipeline0/GstQTDemux:qtdemux0: This file is invalid and cannot be played.
Additional debug info:
qtdemux.c(747): gst_qtdemux_pull_atom (): /GstPipeline:pipeline0/GstQTDemux:qtdemux0:
atom has bogus size 1380533830
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...

This command is right:
gst-launch-1.0 playbin uri=file:///home/nvidia/guoxiaolu/testvideo/v5_10.39.241.47.avi

I am using a Xavier AGX and compiled opencv with gstreamer support:
Video I/O:
  DC1394: NO
  FFMPEG: NO
    avcodec: NO
    avformat: NO
    avutil: NO
    swscale: NO
    avresample: NO
  GStreamer:
    base: YES (ver 1.14.5)
    video: YES (ver 1.14.5)
    app: YES (ver 1.14.5)
    riff: YES (ver 1.14.5)
    pbutils: YES (ver 1.14.5)

Seems the gstreamer pipeline failed to start. Be sure the passed string is correct by trying it with gst-launch.

The main issue is using qtdemux. Since an avi file uses neither the qt nor the mp4 container, it is not correct.
Just decode with a pipeline such as:

filesrc location=test.avi ! mpeg4videoparse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink
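
For reference, dropped into OpenCV that same string would look like this (just a sketch; replace test.avi with your actual file path):

import cv2

pipeline = ('filesrc location=test.avi ! mpeg4videoparse ! nvv4l2decoder ! '
            'nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! '
            'video/x-raw,format=BGR ! appsink')
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
print(cap.isOpened())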

It still returns false in opencv; however, gst-launch-1.0 seems correct:
Setting pipeline to PAUSED ...
Opening in BLOCKING MODE
Pipeline is PREROLLING ...
NvMMLiteOpen : Block : BlockType = 260
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 260
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock

What confuses me is that cv2.VideoCapture(0) (reading a usb camera) works directly and correctly, without any additional pipeline string.

This is using V4L API, not gstreamer API, so the messages are different.
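
To make the difference explicit, you can force the backend when opening the camera (a sketch only; /dev/video0 and the YUY2 caps are assumptions about your usb camera):

import cv2

# Plain V4L2 backend, no pipeline string needed:
cap = cv2.VideoCapture(0, cv2.CAP_V4L2)
print('V4L2 opened:', cap.isOpened())
cap.release()

# Same camera through an explicit GStreamer pipeline:
cap = cv2.VideoCapture(
    'v4l2src device=/dev/video0 ! video/x-raw,format=YUY2 ! '
    'videoconvert ! video/x-raw,format=BGR ! appsink',
    cv2.CAP_GSTREAMER)
print('GStreamer opened:', cap.isOpened())
cap.release()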

Be sure the filepath is correct. This works fine for me:

import os
import cv2
print(cv2.__version__)

filepath = "/home/nvidia/Desktop/opencv/opencv/samples/data/Megamind.avi"
if os.system('ls -l '+filepath):
	print('Error file not found')
	exit()

cap = cv2.VideoCapture('filesrc location={} ! mpeg4videoparse ! omxmpeg4videodec ! nvvidconv ! video/x-raw,format=BGRx ! queue ! videoconvert ! queue ! video/x-raw, format=BGR ! appsink'.format(filepath), cv2.CAP_GSTREAMER)

if  not cap.isOpened():
	print("Failed to open capture")
	exit()

while True:
	ret, frame = cap.read()
	if not ret:
		break
	cv2.imshow('Test', frame)
	cv2.waitKey(1)

Thank you very much, this time I get the correct result!
Besides, why is reading video on the Xavier AGX different from the Nano, and seemingly more complex and weird? Is it because opencv compiled on the Xavier does not support ffmpeg?

Hi,
I am trying to read an mp4 video file using gstreamer and opencv. I used the pipeline mentioned in this post as the solution:

cap = cv2.VideoCapture('filesrc location={} ! qtdemux ! queue ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw,format=BGRx,width=640,height=480 ! queue ! videoconvert ! queue ! video/x-raw, format=BGR ! appsink'.format(filepath), cv2.CAP_GSTREAMER)

but when I check cap.isOpened() it is always false. I am not able to read frames. Please help.

I have a Xavier NX and JetPack 4.6.
I recompiled opencv (with gstreamer) using this script:

https://github.com/AastaNV/JEP/blob/master/script/install_opencv4.5.0_Jetson.sh

You may first try pure gstreamer before trying opencv.

Try:

gst-launch-1.0 -v uridecodebin uri=file:///home/nvidia/Desktop/opencv/opencv/samples/data/Megamind.avi ! nvvidconv ! video/x-raw,format=YUY2 ! xvimagesink

# uridecodebin may internally expand to something like this:
gst-launch-1.0 -v filesrc location=/home/nvidia/Desktop/opencv/opencv/samples/data/Megamind.avi ! avidemux ! mpeg4videoparse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=YUY2 ! xvimagesink

I think you are mixing up the mp4 container format, which can be handled by qtdemux, with MPEG-4 video encoding.
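
If those display correctly, the same idea carries over to OpenCV. A sketch using uridecodebin so the right demuxer and decoder are picked automatically (the URI is just the sample file used above):

import cv2

pipeline = ('uridecodebin uri=file:///home/nvidia/Desktop/opencv/opencv/samples/data/Megamind.avi ! '
            'nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! '
            'video/x-raw,format=BGR ! appsink')
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
print(cap.isOpened())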