OpenCV ops blocking for multi-camera capture with GStreamer

Hello, I have a Python application that does object detection on video from 2 CSI cameras attached to a Jetson Nano dev board. On one thread, the app captures frames from the 2 cameras (sequentially) with the following GStreamer pipeline:

nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)120/1 ! nvvidconv flip-method=2 ! video/x-raw, width=(int)1280, height=(int)720, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink drop=true max-buffers=5

and another thread reads the frames and runs inference. The app has several inactive periods during which it releases the VideoCapture objects and reinitializes them when required. After some time of active use, calls to the OpenCV VideoCapture read, open, and release methods block. CPU usage is usually high at the time. This occurs only sometimes. If I restart the application, I get

Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:543 Failed to create CaptureSession

which gets resolved if I restart the nvargus-daemon system service. I am unable to understand why and how this happens, and whether there is a solution. I have made sure that there is no deadlock in the app, i.e. multiple threads do not access the same OpenCV VideoCapture object or method.
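In the meantime, the only mitigation I can think of is detecting the hang instead of blocking forever: run the read on a helper thread and give up after a timeout. A minimal sketch (the `read_with_timeout` helper is my own name, not an OpenCV API):

```python
import threading

def read_with_timeout(cap, timeout=5.0):
    """Call cap.read() on a helper thread; return (None, None) if it has not
    returned within `timeout` seconds, i.e. the capture is presumably hung."""
    result = [None, None]

    def _worker():
        result[0], result[1] = cap.read()

    t = threading.Thread(target=_worker, daemon=True)
    t.start()
    t.join(timeout)
    if t.is_alive():
        # read() is still blocked; the daemon thread is abandoned because
        # OpenCV offers no way to cancel a blocking read
        return None, None
    return result[0], result[1]
```

On (None, None) the app could release the captures and restart nvargus-daemon (sudo systemctl restart nvargus-daemon) before reopening, instead of hanging indefinitely.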

Specs of the system:

  • Raspberry Pi Camera v2
  • Jetpack 4.3
  • OpenCV 4.1.1 compiled from source using the script in GitHub - AastaNV/JEP
  • Dev Board B01
  • Gstreamer 1.14.5

I think you have to kill the process using pkill, since it is still holding the camera. I hope this works for you.

The camera does work for me, but ideally I do not want to kill the process when the read op starts blocking.

It’s hard to advise with so few details.
You may have 2 cv2.VideoCapture objects, each reading its own camera with such GStreamer pipelines, one for sensor 0 and one for sensor 1. Then, if they both open fine, read one frame from each in your capture loop or capture thread and provide these to your inference thread.
You may have to perform some locking/buffering so that the read frames don’t change while the inference thread is reading them.
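The locking/buffering can be as small as a latest-frame holder shared between the two threads — a sketch with a hypothetical `FrameBuffer` class:

```python
import threading

class FrameBuffer:
    """Holds the most recent frame from each camera. The capture thread
    writes with put(); the inference thread takes a consistent snapshot
    with get_all(); the lock keeps each handoff atomic."""

    def __init__(self, num_cams=2):
        self._lock = threading.Lock()
        self._frames = [None] * num_cams

    def put(self, cam_idx, frame):
        with self._lock:
            self._frames[cam_idx] = frame

    def get_all(self):
        with self._lock:
            # shallow copy so later put() calls don't mutate the snapshot list
            return list(self._frames)
```

If OpenCV reuses the underlying image buffer between reads, store frame.copy() instead of the raw array.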

I am able to successfully open the cv2 VideoCapture objects for both sensors. Each object is opened and then “read” sequentially on a dedicated thread. I have a lock in place to hand the frames to the inference thread. The “read”s happen for 7–8 mins, after which the OpenCV “read” call blocks.

Hi,
Could you please make a sample based on this:

so that we can run it to reproduce the issue and investigate.

Here is equivalent code that I tried, and I got the same blocking scenario. It doesn’t happen immediately, but after 5–15 mins it stops logging and blocks at read indefinitely.

from threading import Thread
import time
import logging
import cv2

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
logger.addHandler(ch)

class VidCam:

    def __init__(self):
        self.caps = []
        self.running = True
        self.cap_thread = None

    def gstreamer_pipeline(self, _id):
        return (
            "nvarguscamerasrc sensor-id=%d ! "
            "video/x-raw(memory:NVMM), "
            "width=(int)%d, height=(int)%d, "
            "format=(string)NV12, framerate=(fraction)%d/1 ! "
            "nvvidconv flip-method=%d ! "
            "video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
            "videoconvert ! "
            "video/x-raw, format=(string)BGR ! appsink"  # drop=true max-buffers=5
            % (
                _id,
                1280,
                720,
                120,
                2,
                1280,
                720,
            )
        )

    def cap_frame(self):
        self.running = True
        start_time = time.time()
        count = 0
        for cam_idx in range(2):
            self.caps.append(cv2.VideoCapture(self.gstreamer_pipeline(cam_idx),
                                              cv2.CAP_GSTREAMER))
        while self.running:
            for cam_idx in range(2):
                if self.caps[cam_idx].isOpened():
                    ret,img = self.caps[cam_idx].read()
                    if not ret:
                        print("No return for camera %d" % (cam_idx))
                        self.running = False
                        break
                else:
                    print("Camera %d not opened" % (cam_idx))
            count += 1
            if (count % 1000) == 0:
                logger.debug("Pitstop")
            if time.time()-start_time > 600.0:
                logger.info("10 min mark, 1 min break")
                for cam_idx in range(2):
                    self.caps[cam_idx].release()
                time.sleep(60)
                for cam_idx in range(2):
                    self.caps[cam_idx].open(self.gstreamer_pipeline(cam_idx),
                                            cv2.CAP_GSTREAMER)
                start_time = time.time()
        end_time = time.time()
        print("Captured %d frames in %.2f secs --> %.2f fps" % (count,
            end_time-start_time,count/(end_time-start_time)))


    def begin_capture(self):
        self.cap_thread = Thread(target=self.cap_frame,args=())
        self.cap_thread.start()

    def halt(self):
        self.running = False
        self.cap_thread.join()
        for cam_idx in range(2):
            self.caps[cam_idx].release()

if __name__ == '__main__':
    cam = VidCam()
    print("Beginning capture...")
    cam.begin_capture()
    time.sleep(3600)
    print("Stopping capture")
    cam.halt()

@DaneLLL UPDATE: it doesn’t work (blocks) when both cameras capture at 120 fps. However, it works when both cameras capture at 60 fps. While 60 fps capture works in the sample program above, in my inference application even that fails after an hour or so.

I also tried changing some GStreamer appsink properties such as “max-lateness”, “sync” and “name”, thinking that might resolve the issue, but the blocking persists.
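For reference, here is how such appsink properties go into the pipeline string; the particular values below (drop=true, max-buffers=1, sync=false) are just one combination to illustrate, not a confirmed fix:

```python
# appsink element with explicit queueing behaviour:
#   drop=true      discard old buffers instead of blocking the pipeline
#   max-buffers=1  keep only the newest frame in the appsink queue
#   sync=false     deliver buffers as they arrive, not on their timestamps
appsink = "appsink drop=true max-buffers=1 sync=false"

pipeline = (
    "nvarguscamerasrc sensor-id=0 ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, "
    "format=NV12, framerate=60/1 ! "
    "nvvidconv flip-method=2 ! "
    "video/x-raw, format=BGRx ! videoconvert ! "
    "video/x-raw, format=BGR ! " + appsink
)
```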

Hi,
Please try with sudo jetson_clocks, or 1920x1080p30. It might be too heavy to run two high-fps sources on Jetson Nano.


Hi,
It looks specific to hooking with OpenCV. We have tried the pipeline:

$ gst-launch-1.0 -v nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM), width=1280, height=720, format=NV12, framerate=120/1' ! nvvidconv flip-method=2 ! video/x-raw, width=1280, height=720, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! fpsdisplaysink text-overlay=0 video-sink=fakesink nvarguscamerasrc sensor-id=1 ! 'video/x-raw(memory:NVMM), width=1280, height=720, format=NV12, framerate=120/1' ! nvvidconv flip-method=2 ! video/x-raw, width=1280, height=720, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! fpsdisplaysink text-overlay=0 video-sink=fakesink

It ran for 2+ hours and didn’t hit the hang issue.


It seems that the heavy load is causing the camera reads to block. The sample program uses 180% CPU, with the nvargus-daemon process using another 135%, which leads to the blocking behavior. If I run the same Python program with the output width and height halved in the GStreamer pipeline, the CPU usage of the Python program drops to 75% (with nvargus-daemon still using 135%) and the program runs indefinitely (I ran it for 24 hours straight). The gst-launch-1.0 command line provided by @DaneLLL above uses about 85% CPU in addition to the nvargus-daemon process. How can I reduce the CPU usage? Would using C++ help? Apart from using the GStreamer library directly, is there another way to capture video in an application on the Jetson Nano that uses the NVIDIA driver?
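For now, halving the output size is my only working mitigation; my understanding (unconfirmed) is that videoconvert doing the BGRx→BGR conversion on the CPU is the expensive stage, so letting nvvidconv downscale in hardware before it shrinks that workload. The pipeline I run in that case looks like this (sizes are the ones I tested):

```python
def downscaled_pipeline(sensor_id, cap_w=1280, cap_h=720,
                        out_w=640, out_h=360, fps=60):
    """Capture at the sensor's size, but have nvvidconv (hardware converter)
    downscale before the CPU-side videoconvert BGRx->BGR conversion."""
    return (
        "nvarguscamerasrc sensor-id=%d ! "
        "video/x-raw(memory:NVMM), width=%d, height=%d, "
        "format=NV12, framerate=%d/1 ! "
        "nvvidconv flip-method=2 ! "
        "video/x-raw, width=%d, height=%d, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! "
        "appsink drop=true max-buffers=1"
        % (sensor_id, cap_w, cap_h, fps, out_w, out_h)
    )
```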

In case OpenCV videoio is involved in the issue, you may try jetson-utils instead. See this example.


Follow-up: I still haven’t been able to resolve this issue unless I use gst-launch-1.0. As per @Honey_Patouceul’s suggestion, I tried jetson-utils and still faced hanging issues, which I described here: gstreamer stalling (part 2) · Issue #47 · dusty-nv/jetson-utils · GitHub

I am also facing the same problem.

I have also tried running this pipeline, but it runs for only a couple of seconds before one of the sinks stops working, while the other continues:


/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 1738, dropped: 0, current: 98,54, average: 101,17
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink1: last-message = rendered: 1789, dropped: 0, current: 101,26, average: 101,34
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 1788, dropped: 0, current: 99,08, average: 101,11
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink1: last-message = rendered: 1838, dropped: 0, current: 96,47, average: 101,20
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 1839, dropped: 0, current: 100,17, average: 101,09
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink1: last-message = rendered: 1889, dropped: 0, current: 101,55, average: 101,21
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 1889, dropped: 0, current: 97,47, average: 100,99
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 1948, dropped: 0, current: 117,94, average: 101,43
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 2010, dropped: 0, current: 123,88, average: 102,00
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 2069, dropped: 0, current: 116,42, average: 102,36
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 2126, dropped: 0, current: 112,76, average: 102,61
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 2180, dropped: 0, current: 105,97, average: 102,69
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 2238, dropped: 0, current: 114,31, average: 102,97

Here we can see that fpsdisplaysink1 has stopped.

And in the logs I can see many errors:


2021-03-26T09:47:01.416248-03:00 art-jetson nvargus-daemon[5245]: SCF: Error InvalidState: Session has suffered a critical failure (in src/api/Session.cpp, function capture(), line 667)
2021-03-26T09:47:01.416284-03:00 art-jetson nvargus-daemon[5245]: (Argus) Error InvalidState:  (propagating from src/api/ScfCaptureThread.cpp, function run(), line 109)
2021-03-26T09:47:01.416319-03:00 art-jetson nvargus-daemon[5245]: SCF: Error InvalidState: Session has suffered a critical failure (in src/api/Session.cpp, function capture(), line 667)
2021-03-26T09:47:01.416355-03:00 art-jetson nvargus-daemon[5245]: (Argus) Error InvalidState:  (propagating from src/api/ScfCaptureThread.cpp, function run(), line 109)
2021-03-26T09:47:01.416389-03:00 art-jetson nvargus-daemon[5245]: SCF: Error InvalidState: Session has suffered a critical failure (in src/api/Session.cpp, function capture(), line 667)

I tried running the same pipeline at 1920x1080@60fps and it also hung after a few seconds; then I tried 1280x720@60fps, which ran for about 2 hours before hanging.

I am running # R32 (release), REVISION: 4.4, GCID: 23942405, BOARD: t210ref, EABI: aarch64, DATE: Fri Oct 16 19:44:43 UTC 2020

This is not an answer, but I am not sure 120 fps (without being a Jetson partner) is the way forward.
In R32.5, the IMX219 only provides 60 fps modes (at least in my case, running R32.5.1 on NX).
I may be wrong; someone from NVIDIA may clarify.

Hi,
@feupos, please start a new topic for your issue. It may be specific to your sensors, since we have tried two Pi camera V2s at 720p120 with the gst-launch-1.0 command and it runs fine.

Could you please retry this? For me too, on both Jetson Nano and Jetson Xavier NX (with no GUI, max power and fan mode), this command stops displaying one of the sinks, as @feupos mentions.

Hi,
We have removed the 120fps sensor mode for Pi camera V2. Thanks to Honey_Patouceul for the reminder. We will again set up a long-run test launching two Pi camera V2s with the gst-launch command.

Hi,
On Jetpack 4.5.1 (r32.5.1), with Jetson Nano + 2 Pi camera V2, we can run the following script for 17+ hours:

import sys
import cv2

def read_cam():
    cap = cv2.VideoCapture("nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1280, height=720, framerate=60/1 ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink", cv2.CAP_GSTREAMER)
    cap1 = cv2.VideoCapture("nvarguscamerasrc sensor-id=1 ! video/x-raw(memory:NVMM), width=1280, height=720, framerate=60/1 ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink", cv2.CAP_GSTREAMER)
    if cap.isOpened() and cap1.isOpened():
        cv2.namedWindow("demo", cv2.WINDOW_AUTOSIZE)
        cv2.namedWindow("demo1", cv2.WINDOW_AUTOSIZE)
        while True:
            ret_val, img = cap.read()
            ret_1, img1 = cap1.read()
            cv2.imshow('demo', img)
            cv2.imshow('demo1', img1)
            cv2.waitKey(10)
    else:
        print("camera open failed")

    cv2.destroyAllWindows()


if __name__ == '__main__':
    read_cam()

FYR.