Issue with multi-camera gstreamer capture using OpenCV

Using a Xavier AGX, I am trying to capture frames from a pair of USB cameras using OpenCV and GStreamer. I can capture frames from a single camera with my GStreamer pipeline, but I am unable to open two GStreamer capture pipelines with OpenCV at the same time.

Below is some example code where I successfully capture an image with the cap0 pipeline (albeit with a warning), then open cap1 and am unable to capture an image.

Python 3.6.9 (default, Jan 26 2021, 15:33:00) 
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> cap0 = cv2.VideoCapture('v4l2src device=/dev/video0 io-mode=2 ! image/jpeg, width=(int)3264, height=(int)2448, framerate=15/1 ! nvv4l2decoder mjpeg=1 ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink', cv2.CAP_GSTREAMER)
Opening in BLOCKING MODE
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 277 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 277 
[ WARN:0] global /tmp/build_opencv/opencv/modules/videoio/src/cap_gstreamer.cpp (935) open OpenCV | GStreamer warning: Cannot query video position: status=1, value=7, duration=-1
>>> cap0.read()
(True, array([[[ 0, 28,  0],
        [ 0, 14,  0],
        [ 0, 11,  0],
        ...,
        [ 0, 51, 62],
        [ 0, 45, 56],
        [ 0, 33, 45]]], dtype=uint8))
>>> cap1 = cv2.VideoCapture('v4l2src device=/dev/video1 io-mode=2 ! image/jpeg, width=(int)3264, height=(int)2448, framerate=15/1 ! nvv4l2decoder mjpeg=1 ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink', cv2.CAP_GSTREAMER + 1)
>>> cap1.read()
(False, None)

Am I opening the second capture pipeline incorrectly? Is this issue related to the warning when the first camera is opened? Capture from both cameras works using cv2.CAP_V4L2, but I want to use GStreamer as it has given me a better frame rate.

Using OpenCV 4.4.0 and two USB cameras with Sony IMX179 sensors. Below shows the supported capture formats.

$ v4l2-ctl -d /dev/video0 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'MJPG' (compressed)
	Name        : Motion-JPEG
		Size: Discrete 1920x1080
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 320x180
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 320x240
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 424x240
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 640x360
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 848x480
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 960x540
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1280x720
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1600x1200
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 640x480
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 2592x1944
			Interval: Discrete 0.067s (15.000 fps)
		Size: Discrete 3264x2448
			Interval: Discrete 0.067s (15.000 fps)

	Index       : 1
	Type        : Video Capture
	Pixel Format: 'YUYV'
	Name        : YUYV 4:2:2
		Size: Discrete 1920x1080
			Interval: Discrete 0.200s (5.000 fps)
		Size: Discrete 320x180
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 320x240
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 2592x1944
			Interval: Discrete 0.500s (2.000 fps)
		Size: Discrete 640x480
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 3264x2448
			Interval: Discrete 0.500s (2.000 fps)

Hi,
Please check whether you can launch the two cameras with two gst-launch-1.0 commands. You can open two console windows and run identical commands like:

$ gst-launch-1.0 v4l2src device=/dev/videoX io-mode=2 ! image/jpeg,width=3264,height=2448,framerate=15/1 ! nvv4l2decoder mjpeg=1 ! nvvidconv ! nvegltransform ! nveglglessink sync=0

Not sure, but is the + 1 in cv2.CAP_GSTREAMER + 1 deliberate? What is the intent?

@DaneLLL Thanks for the reply - yes I am able to stream from two cameras at once by running that command on two consoles using video0 and video1

@Honey_Patouceul I added the + 1 because that is how the indexing worked for capturing from multiple cameras with cv2.CAP_V4L2. When I remove the + 1, it just crashes when opening the second camera (see the output below).

Opening in BLOCKING MODE
Opening in BLOCKING MODE 
Opening in BLOCKING MODE
Segmentation fault (core dumped)

Hi,
We are able to open two USB cameras with the python code:

import cv2

def read_cam():
    # Open both cameras explicitly through the GStreamer backend
    cap = cv2.VideoCapture("v4l2src device=/dev/video0 ! video/x-raw, width=640, height=480 ! videoconvert ! video/x-raw,format=BGR ! appsink", cv2.CAP_GSTREAMER)
    cap1 = cv2.VideoCapture("v4l2src device=/dev/video1 ! video/x-raw, width=640, height=480 ! videoconvert ! video/x-raw,format=BGR ! appsink", cv2.CAP_GSTREAMER)
    if cap.isOpened() and cap1.isOpened():
        cv2.namedWindow("demo", cv2.WINDOW_AUTOSIZE)
        cv2.namedWindow("demo1", cv2.WINDOW_AUTOSIZE)
        while True:
            ret_val, img = cap.read()
            ret_1, img1 = cap1.read()
            if not (ret_val and ret_1):
                break
            cv2.imshow('demo', img)
            cv2.imshow('demo1', img1)
            cv2.waitKey(10)
    else:
        print("camera open failed")

    cv2.destroyAllWindows()


if __name__ == '__main__':
    read_cam()

Please give it a try.

Thanks DaneLLL. We are getting closer! This works for YUYV 3264x2448 capture at 2FPS, but I am hoping to do MJPG 3264x2448 at 15FPS (as is supported by my USB cameras). Can you please share how to properly update the gstreamer pipeline for MJPG capture?

You may try:

cap = cv2.VideoCapture("v4l2src device=/dev/video0 ! jpegparse ! nvv4l2decoder mjpeg=1 ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink drop=1 ")

# or
cap = cv2.VideoCapture("v4l2src device=/dev/video0 io-mode=2 ! jpegparse ! nvv4l2decoder mjpeg=1 ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink drop=1 ")

# If none works, you may use CPU decoding but it may be slow:
cap = cv2.VideoCapture("v4l2src device=/dev/video0 ! jpegparse ! jpegdec ! videoconvert ! video/x-raw, format=BGR ! appsink drop=1 ")
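If it helps, here is a small sketch that tries these variants in order and keeps the first one that opens (MJPG_PIPELINES and open_first_working are names of my own, not OpenCV API; device paths assumed):

```python
# The three candidate pipelines above, with the device path left as a slot.
MJPG_PIPELINES = [
    # HW-accelerated decode
    "v4l2src device={dev} ! jpegparse ! nvv4l2decoder mjpeg=1 ! nvvidconv ! "
    "video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink drop=1",
    # HW-accelerated decode with io-mode=2
    "v4l2src device={dev} io-mode=2 ! jpegparse ! nvv4l2decoder mjpeg=1 ! nvvidconv ! "
    "video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink drop=1",
    # CPU decode fallback (jpegdec instead of nvv4l2decoder); slower
    "v4l2src device={dev} ! jpegparse ! jpegdec ! videoconvert ! "
    "video/x-raw, format=BGR ! appsink drop=1",
]

def open_first_working(dev):
    """Return the first capture that opens for this device, or None."""
    import cv2  # deferred so the strings above can be inspected without OpenCV
    for template in MJPG_PIPELINES:
        cap = cv2.VideoCapture(template.format(dev=dev), cv2.CAP_GSTREAMER)
        if cap.isOpened():
            return cap
        cap.release()
    return None
```

You would then call open_first_working("/dev/video0") and open_first_working("/dev/video1") and compare which variant each camera ends up on.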

Thanks @Honey_Patouceul. Looks like this might have pinpointed the issue. The first two pipelines work for single-camera capture, but fail in the same way as described above when a second camera is opened with OpenCV. The last pipeline works with two cameras, but gives a slower frame rate, as you guessed (the same frame rate as cv2.CAP_V4L2).

Do you have any other recommendations for accelerated MJPG capture with more than one camera? Could this be a camera issue? Any comments or suggestions are welcome.

Hi,
There is a limitation in the hardware engines, so running through OpenCV consumes significant CPU load. Please check the discussion in
[Gstreamer] nvvidconv, BGR as INPUT - #2 by DaneLLL

Since you need close-to-4K resolution, the memory copy may dominate the performance. Please execute sudo nvpmodel -m 0 and sudo jetson_clocks. These commands keep the CPU cores at maximum clocks and should bring a performance improvement.
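For clarity, the two commands (they take effect until reboot):

```shell
sudo nvpmodel -m 0    # select the max-performance power model
sudo jetson_clocks    # lock CPU/GPU/EMC clocks at their maximum
```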

Or you may consider running the jetson_multimedia_api samples and applying OpenCV functions to the NvBuffer.

Thank you @DaneLLL . I ran sudo nvpmodel -m 0 and sudo jetson_clocks as suggested, but I am still not even able to capture MJPG from two cameras at 480p. As such, it doesn’t seem to be an issue with CPU loading to me. What do you think?

I am open to running the cameras with other methods - just trying to maximise frame-rate at 8MP capture with multiple cameras.

This might be a USB issue. Be sure with lsusb -t that you’re using USB3 if your camera supports it.
Another possible issue is that in the UVC driver the first MJPG camera requests the full available bandwidth, but I can’t provide more help for debugging this as I don’t have any MJPG cam.
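For example (bus numbers and device names will differ on your system):

```shell
# Show the USB topology with the negotiated speed per device:
# 5000M means USB3 SuperSpeed, 480M means USB2 High-Speed (the UVC
# driver may reserve much of a shared 480M link for a single camera).
lsusb -t
```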