Multiple OpenCV VideoCapture streams from multiple CSI-MIPI sensors freezing Jetson

My project requires two images to be taken at the same time.
I am using two of the Leopard Imaging IMX377 CSI-MIPI cameras.
I have been using a GStreamer pipeline with OpenCV VideoCapture on a single sensor fairly successfully. The problem is that when I start two OpenCV VideoCaptures, my Jetson eventually freezes up and stops responding to keyboard or mouse input. Here is an example of the code I am running:

// Create OpenCV capture objects, ensure they work.
cv::VideoCapture cap1(pipeline1, cv::CAP_GSTREAMER);
if (!cap1.isOpened()) {
    std::cout << "Connection failed";
    return -1;
}
cv::VideoCapture cap2(pipeline2, cv::CAP_GSTREAMER);
if (!cap2.isOpened()) {
    std::cout << "Connection failed";
    return -1;
}

My problems don’t happen every time I run the program.

Hi,

Have you built OpenCV from source?

Please note that the default OpenCV package doesn’t enable GStreamer support.
You will need to build it from source.
Here is a building script for your reference: https://github.com/AastaNV/JEP/blob/master/script/install_opencv3.4.0_TX2.sh

Thanks.

Yes, I built it from source. It works great with one VideoCapture, but the problem occurs when two are running. I noticed in a RidgeRun blog post that they had parameters for two sensor IDs in one GStreamer pipeline. Can OpenCV VideoCapture work with that?

Is the RidgeRun GStreamer pipeline in this article (https://developer.ridgerun.com/wiki/index.php?title=Jetson_TX1/TX2_Multi_Camera_Exposure_Feedback_Control)
using one pipeline with multiple sensors in it, or multiple streams?

Hi,

You will need to update the index of the camera in the API.
Please check the mount location of each camera first:

ll /dev/video*

Suppose you have one mounted on /dev/video0 and the other mounted on /dev/video1.
Try passing this index to the OpenCV API:
https://docs.opencv.org/4.0.0/d8/dfe/classcv_1_1VideoCapture.html#a57c0e81e83e60f36c83027dc2a188e80

VideoCapture() [1/3]
cv::VideoCapture::VideoCapture()
Python:
<VideoCapture object> = cv.VideoCapture()
<VideoCapture object> = cv.VideoCapture(filename[, apiPreference])
<VideoCapture object> = cv.VideoCapture(index[, apiPreference])

Thanks.

I’m afraid I don’t understand your solution. I am also not using Python but C++ OpenCV. I was able to solve my problem by using videomixer in GStreamer and combining the streams from both cameras into a single pipeline. I feel like there’s a better approach, but it seems to work.

Here’s an example from the function that generates my pipeline:

std::string get_tegra_pipeline(int width, int height, int fps) {
    const std::string w = std::to_string(width), h = std::to_string(height), f = std::to_string(fps);
    return "videomixer name=mix sink_0::xpos=0 sink_1::xpos=" + w +
           " ! videoconvert ! video/x-raw, format=(string)BGR ! appsink wait-on-eos=false drop=true max-buffers=4 sync=false "
           "nvcamerasrc sensor-id=0 ! video/x-raw(memory:NVMM),width=(int)" + w + ",height=(int)" + h +
           ",format=(string)I420,framerate=(fraction)" + f + "/1 ! nvvidconv ! video/x-raw,format=(string)BGRx ! mix.sink_0 "
           "nvcamerasrc sensor-id=2 ! video/x-raw(memory:NVMM),width=(int)" + w + ",height=(int)" + h +
           ",format=(string)I420,framerate=(fraction)" + f + "/1 ! nvvidconv ! video/x-raw,format=(string)BGRx ! mix.sink_1";
}