OpenCV ops blocking for multi-camera capture with Gstreamer

Hello, I have a Python application that does object detection on videos from 2 CSI cameras attached to the Jetson Nano dev board. On one thread, the app captures frames from the 2 cameras (sequentially) with the following gstreamer pipeline

nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)120/1 ! nvvidconv flip-method=2 ! video/x-raw, width=(int)1280, height=(int)720, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink drop=true max-buffers=5

and another thread reads the frames and does the inference. The app has several inactive periods during which it releases the VideoCapture objects and reinitializes them when required. After some time of active use, the calls to the OpenCV VideoCapture read, open, and release methods block. CPU usage is usually high at that time. This occurs only sometimes. If I restart the application, then I get

Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:543 Failed to create CaptureSession

which gets resolved if I restart the nvargus-daemon system service. I am unable to understand why and how this happens and if there is a solution to it. I have made sure that there is no deadlock in the app i.e. multiple threads do not access the same OpenCV VideoCapture object, method.
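For context, the captures are opened through OpenCV's GStreamer backend roughly like this. This is a sketch: gst_pipeline is a hypothetical helper wrapping the launch string from the question, and the actual VideoCapture calls only work on a Jetson with a GStreamer-enabled OpenCV build.

```python
def gst_pipeline(sensor_id, width=1280, height=720, fps=120, flip=2):
    # Same elements as the pipeline in the question, parameterized per sensor.
    return (
        "nvarguscamerasrc sensor-id=%d ! "
        "video/x-raw(memory:NVMM), width=(int)%d, height=(int)%d, "
        "format=(string)NV12, framerate=(fraction)%d/1 ! "
        "nvvidconv flip-method=%d ! "
        "video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
        "videoconvert ! video/x-raw, format=(string)BGR ! "
        "appsink drop=true max-buffers=5"
        % (sensor_id, width, height, fps, flip, width, height)
    )

# On the Jetson, one capture per sensor is opened from the launch string:
#   cap0 = cv2.VideoCapture(gst_pipeline(0), cv2.CAP_GSTREAMER)
#   cap1 = cv2.VideoCapture(gst_pipeline(1), cv2.CAP_GSTREAMER)
print(gst_pipeline(0))
```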

Specs of the system:


I think you have to kill the process using pkill, because the camera itself is still working. I hope this works for you.

The camera does work for me, but ideally I do not want to kill the process when the read op starts blocking.

It’s hard to advise with so few details.
You may have 2 cv2.VideoCaptures, each reading its own camera with such GStreamer pipelines, one for sensor 0 and one for sensor 1. Then, if they both open fine, read one frame from each in your capture loop or capture thread and provide these to your inference thread.
You may have to perform some locking/buffering so that the read frames don’t change while the inference thread is reading them.
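The locking/buffering mentioned above could look roughly like this. This is a hypothetical sketch, not code from the thread; LatestFrame is an invented name, and the string frames stand in for the images cap.read() would return.

```python
import threading

class LatestFrame:
    """Hands the most recent frame from a capture thread to an inference
    thread; the lock guarantees the reader never sees a half-updated pair."""

    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None
        self._frame_id = 0

    def put(self, frame):
        # Capture thread: overwrite the previous frame. Inference always
        # wants the newest frame, so no queue is needed.
        with self._lock:
            self._frame = frame
            self._frame_id += 1

    def get(self):
        # Inference thread: snapshot the id and frame atomically.
        with self._lock:
            return self._frame_id, self._frame

buf = LatestFrame()

def capture_loop():
    # Stand-in for the real loop that would call cap.read() per camera.
    for i in range(100):
        buf.put("frame-%d" % i)

t = threading.Thread(target=capture_loop)
t.start()
t.join()
print(buf.get())  # -> (100, 'frame-99')
```

The inference thread can compare the returned frame id against the last one it processed to avoid running inference twice on the same frame.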

I am able to successfully open the cv2 VideoCapture objects for both sensors. Each object is opened and then “read” sequentially on a dedicated thread. I have a lock in place for the inference thread to get the frames. The “read”s work for 7-8 minutes, after which the OpenCV “read” call blocks.

Could you please make a sample based on this:

so that we can run it to reproduce the issue and investigate.

Here is equivalent code that I tried, which hits the same blocking scenario. It doesn’t happen immediately, but after 5-15 minutes it stops logging and blocks at read indefinitely.

from threading import Thread
import time
import logging
import cv2

logger = logging.getLogger()
logger.setLevel(logging.INFO)
ch = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
logger.addHandler(ch)

class VidCam:

    def __init__(self):
        self.caps = []
        self.running = True
        self.cap_thread = None

    def gstreamer_pipeline(self, _id):
        return (
            "nvarguscamerasrc sensor-id=%d ! "
            "video/x-raw(memory:NVMM), "
            "width=(int)%d, height=(int)%d, "
            "format=(string)NV12, framerate=(fraction)%d/1 ! "
            "nvvidconv flip-method=%d ! "
            "video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
            "videoconvert ! "
            "video/x-raw, format=(string)BGR ! appsink"  # drop=True max-buffers=5
            % (_id, 1280, 720, 120, 2, 1280, 720)
        )

    def cap_frame(self):
        self.running = True
        start_time = time.time()
        count = 0
        # Open one capture per sensor on this thread
        for cam_idx in range(2):
            self.caps.append(cv2.VideoCapture(self.gstreamer_pipeline(cam_idx),
                                              cv2.CAP_GSTREAMER))
        while self.running:
            for cam_idx in range(2):
                if self.caps[cam_idx].isOpened():
                    ret, img = self.caps[cam_idx].read()
                    if not ret:
                        print("No return for camera %d" % (cam_idx))
                        self.running = False
                else:
                    print("Camera %d not opened" % (cam_idx))
                    self.running = False
            count += 1
            if (count % 1000) == 0:
      "Captured %d frames" % (count))
            if time.time() - start_time > 600.0:
                # Simulate the app's inactive periods: release both captures,
                # pause, then reinitialize them
      "10 min mark, 1 min break")
                for cam_idx in range(2):
                    self.caps[cam_idx].release()
                self.caps = []
                time.sleep(60.0)
                for cam_idx in range(2):
                    self.caps.append(cv2.VideoCapture(self.gstreamer_pipeline(cam_idx),
                                                      cv2.CAP_GSTREAMER))
                start_time = time.time()
        end_time = time.time()
        print("Captured %d frames in %.2f secs --> %.2f fps" % (count,
              end_time - start_time, count / (end_time - start_time)))

    def begin_capture(self):
        self.cap_thread = Thread(target=self.cap_frame, args=())
        self.cap_thread.start()

    def halt(self):
        self.running = False
        self.cap_thread.join()
        for cam_idx in range(2):
            self.caps[cam_idx].release()

if __name__ == '__main__':
    cam = VidCam()
    print("Beginning capture...")
    cam.begin_capture()
    time.sleep(3600.0)  # let the capture loop run long enough to reproduce
    cam.halt()
    print("Stopping capture")

@DaneLLL UPDATE: it blocks when both cameras capture at 120 fps, but works when both capture at 60 fps. While 60 fps capture works in the sample program above, in my inference application even that fails after an hour or so.

I also tried changing some GStreamer appsink properties, such as “max-lateness”, “sync”, and “name”, thinking that it might resolve the issue, but the blocking persists.
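For reference, appsink properties are set inline in the launch string passed to VideoCapture. A sketch of the kind of variants tried follows; drop, max-buffers, sync, max-lateness, and name are standard appsink/basesink properties, though none of them resolved the hang here.

```python
# Shared front half of the pipeline from the question
base = (
    "nvarguscamerasrc sensor-id=0 ! "
    "video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, "
    "format=(string)NV12, framerate=(fraction)120/1 ! "
    "nvvidconv flip-method=2 ! "
    "video/x-raw, width=(int)1280, height=(int)720, format=(string)BGRx ! "
    "videoconvert ! video/x-raw, format=(string)BGR ! "
)

# appsink variants: drop stale buffers, cap the queue, disable clock sync
variants = {
    "drop_old": base + "appsink drop=true max-buffers=1",
    "no_sync":  base + "appsink sync=false",
    "named":    base + "appsink name=mysink drop=true max-buffers=5",
}
for label, pipeline in variants.items():
    print(label, "->", pipeline.rsplit("! ", 1)[-1])
```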

Please try with sudo jetson_clocks or 1920x1080p30. It might be too heavy to run two high fps sources on Jetson Nano.


It looks specific to hooking with OpenCV. We have tried the pipeline:

$ gst-launch-1.0 -v nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM), width=1280, height=720, format=NV12, framerate=120/1' ! nvvidconv flip-method=2 ! video/x-raw, width=1280, height=720, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! fpsdisplaysink text-overlay=0 video-sink=fakesink nvarguscamerasrc sensor-id=1 ! 'video/x-raw(memory:NVMM), width=1280, height=720, format=NV12, framerate=120/1' ! nvvidconv flip-method=2 ! video/x-raw, width=1280, height=720, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! fpsdisplaysink text-overlay=0 video-sink=fakesink

It runs for 2+ hours and we don’t hit the hang issue.


It seems that the heavy load is causing the cameras to block. The sample program uses 180% CPU, and the nvargus-daemon process uses another 135%, which leads to the blocking behavior. If I run the same Python program with the output width and height halved in the GStreamer pipeline, the CPU usage of the Python program drops to 75% (with nvargus-daemon still using 135%) and the program runs indefinitely (I ran it for 24 hours straight). The GStreamer command line provided by @DaneLLL above uses about 85% CPU in addition to the nvargus-daemon process. How can I reduce the CPU usage? Will using C++ help? Apart from using the GStreamer library directly, is there another way to capture video in an application on the Jetson Nano that uses the NVIDIA driver?
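One idea worth trying (an assumption, not a confirmed fix for this hang) is to drop the videoconvert element, which runs on the CPU, and let the hardware nvvidconv deliver BGRx straight to appsink; the BGRx-to-BGR conversion can then be skipped entirely if the model accepts 4-channel input, or done with cv2.cvtColor. Note that whether OpenCV's appsink negotiates BGRx caps depends on the OpenCV build, so this needs verifying on the device.

```python
def gst_pipeline_bgrx(sensor_id, width=1280, height=720, fps=120, flip=2):
    # No videoconvert: nvvidconv (hardware) outputs BGRx directly to appsink,
    # removing the per-frame CPU color conversion from the pipeline.
    return (
        "nvarguscamerasrc sensor-id=%d ! "
        "video/x-raw(memory:NVMM), width=(int)%d, height=(int)%d, "
        "format=(string)NV12, framerate=(fraction)%d/1 ! "
        "nvvidconv flip-method=%d ! "
        "video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
        "appsink drop=true max-buffers=2"
        % (sensor_id, width, height, fps, flip, width, height)
    )

# On the Jetson (assuming the OpenCV build accepts BGRx from appsink):
#   cap = cv2.VideoCapture(gst_pipeline_bgrx(0), cv2.CAP_GSTREAMER)
#   ret, bgrx = cap.read()                        # 4-channel BGRx frame
#   bgr = cv2.cvtColor(bgrx, cv2.COLOR_BGRA2BGR)  # only if BGR is required
print("videoconvert" in gst_pipeline_bgrx(0))  # -> False
```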

In case OpenCV's videoio is involved in the issue, you may try using jetson-utils instead. See this example.
