Remove GStreamer Support from OpenCV on Jetson Xavier

Hi,

I am trying to do some image processing using the Jetson Xavier and some USB cameras.

I am using a Kayeton camera as shown here which supports both MJPG and YUYV formats.

Here is the output when I run v4l2-ctl -d /dev/video0 --list-formats-ext:

ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'MJPG' (compressed)
	Name        : Motion-JPEG
		Size: Discrete 640x480
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1920x1080
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1280x720
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 320x240
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 640x480
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.033s (30.000 fps)

	Index       : 1
	Type        : Video Capture
	Pixel Format: 'YUYV'
	Name        : YUYV 4:2:2
		Size: Discrete 640x480
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 320x240
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 640x480
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.033s (30.000 fps)

Now, I want to use the 1920x1080 MJPG format and simply display the camera feed on the screen.

Doing this on the Jetson Xavier has been extremely frustrating because the OpenCV build includes GStreamer: I cannot simply do cv2.VideoCapture() and set the desired resolution.
I have managed to get some sort of output using the following GStreamer pipeline:

cap_string = 'v4l2src device=/dev/video0 io-mode=2 ! image/jpeg,width=1920,height=1080,framerate=30/1 ! nvv4l2decoder mjpeg=1 ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink drop=1'

This shows the video feed on the screen; however, there is a huge delay (approximately half a second) between what is actually in front of the camera and what is displayed, and the video feed is extremely choppy. Furthermore, when I check the size of the frames, they are 1920x1088 instead of 1920x1080.
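For completeness, this is roughly how I open that pipeline and check the frames (a minimal sketch using the cap_string above; my actual display code differs only in details):

import cv2

cap = cv2.VideoCapture(cap_string, cv2.CAP_GSTREAMER)

ret, frame = cap.read()
if ret:
    print(frame.shape)  # prints (1088, 1920, 3) instead of the expected (1080, 1920, 3)

while ret:
    cv2.imshow('camera', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    ret, frame = cap.read()

cap.release()
cv2.destroyAllWindows()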

This problem does not occur when I use the YUY2 format with GStreamer and the following string:

cap_string = 'v4l2src device=/dev/video0 ! video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 ! nvvidconv ! videoconvert ! video/x-raw,format=(string)BGR ! appsink'

However, this is the wrong frame size for my application; changing the dimensions in the above command to 1920x1080 gives me the error:

'GST_IS_ELEMENT (element)' failed

Now, when I use my desktop Ubuntu machine, which has OpenCV built without GStreamer, I can simply do:

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG'))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

And I can view the camera feed in real time without issues. However, I need to use the cameras on the Jetson, as I will be capturing footage from a mobile setup. If I run the above on the Jetson, I get the same error as before:

gst_element_get_state: assertion 'GST_IS_ELEMENT (element)' failed

My question is in two parts.

  1. Is it possible to overcome the problem I have on the Jetson without removing GStreamer support and installing a different version of OpenCV? I have tried just about every combination of GStreamer pipelines I can find on this forum, and the issue still persists.

  2. Failing that, how do I go about installing OpenCV without GStreamer support? In all honesty this seems preferable, as GStreamer is causing more problems than it solves.

Please let me know if you require any further information.

  • The short answer to your title and to question 2 is to rebuild OpenCV without GStreamer. You may get the script from:
    JEP/install_opencv4.5.0_Jetson.sh at master · AastaNV/JEP · GitHub
    and change -D WITH_GSTREAMER=ON to -D WITH_GSTREAMER=OFF in the cmake command.
    This script builds opencv-4.5.0; you may adjust the version variable at the beginning of the script to build a different version.
    Better to build on an external disk, as the build can use several GB of space, and it may take some time.
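After the rebuild, you can quickly confirm that GStreamer support is really gone by checking OpenCV's build report from Python; GStreamer should be listed as NO in the Video I/O section:

import cv2

# The Video I/O section of this report shows whether GStreamer support was compiled in
print(cv2.getBuildInformation())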

  • For more details, you don't need to remove GStreamer support from OpenCV; you can use other backends such as V4L2 or FFMPEG instead. You would just specify the video capture backend explicitly:

cap = cv2.VideoCapture(0, cv2.CAP_V4L2)
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG'))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
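Note that cap.set() can fail silently if the driver rejects a setting, so it is worth reading back what was actually negotiated; a quick check could look like:

# Read back what the driver actually accepted
print(cap.get(cv2.CAP_PROP_FRAME_WIDTH), cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
ret, frame = cap.read()
if ret:
    print(frame.shape)  # expect (1080, 1920, 3) for the 1920x1080 MJPG mode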

but it would be better to use the Jetson HW decoder, which is available through GStreamer. It is also better to have nvvidconv convert into BGRx with HW so that videoconvert only has to remove the extra 4th byte per pixel:

cap = cv2.VideoCapture("v4l2src device=/dev/video0 io-mode=2 ! image/jpeg,format=MJPG,width=1280,height=720,framerate=30/1 ! nvv4l2decoder mjpeg=1 ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1", cv2.CAP_GSTREAMER)

In my case both are working fine; there is less than half a second between camera and display (using OpenCV imshow, which is not that fast).
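If you want to compare the two methods with numbers rather than by eye, a simple sketch that measures the effective frame rate around cap.read() works with either VideoCapture object above:

import time

# 'cap' is either of the two VideoCapture objects shown above
frames = 0
start = time.time()
while frames < 300:
    ret, _ = cap.read()
    if not ret:
        break
    frames += 1
print('average fps:', frames / (time.time() - start))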

So the difference may be in USB. I see that your camera only supports USB2. If you're familiar with kernel boot args and extlinux.conf, you may try adding:

usbcore.autosuspend=-1 usbcore.usbfs_memory_mb=1000

to the kernel boot args; this disables USB autosuspend and increases the usbfs memory limit to 1000 MB.
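On Jetson these args go on the APPEND line of /boot/extlinux/extlinux.conf, followed by a reboot. As a rough sketch only (keep your existing arguments and just append the two options at the end):

APPEND ${cbootargs} <existing arguments kept as-is> usbcore.autosuspend=-1 usbcore.usbfs_memory_mb=1000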

Be aware that the CPU power on NX is not that of a recent PC. You may boost your NX with:

sudo nvpmodel -m 2
sudo jetson_clocks