Extremely high memory leak with OpenCV video capture

I am running a Python script that captures and processes images from a USB camera using OpenCV 4.5.1 with the V4L2 backend. Each image is fed to a TensorFlow network. I noticed a severe memory leak during execution. The relevant code is the following:

import cv2
cap = cv2.VideoCapture(0)
while True:
    ret, img = cap.read()
    other_process_queue.put(img)  # the image is processed in a parallel process

According to the memory profiler, the instruction that causes the leak is cap.read(). Deleting the img variable did not fix the problem. What could be the cause of this?
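(For completeness: independent of any leak inside cap.read(), this producer/consumer pattern grows without bound if the parallel process drains the queue more slowly than the camera produces frames, since the queue keeps a reference to every enqueued image. Below is a minimal sketch of bounding the buffer; the names are illustrative and a plain queue.Queue stands in for the real inter-process queue.)

```python
import queue

# Bounded stand-in for the inter-process queue from the question.
# With no bound, every frame the consumer has not yet processed
# stays referenced, which can look like a leak at cap.read().
frame_queue = queue.Queue(maxsize=4)

def put_latest(q, frame):
    """Enqueue a frame, discarding the oldest one when the queue is full."""
    try:
        q.put_nowait(frame)
    except queue.Full:
        try:
            q.get_nowait()  # drop the stalest frame
        except queue.Empty:
            pass
        q.put_nowait(frame)

# Simulate a fast producer while the consumer is stalled:
for i in range(100):
    put_latest(frame_queue, i)

# At most maxsize frames are retained, so memory stays bounded.
```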

It seems to be an issue in cv2.VideoCapture(0) with the V4L2 backend. You could try running a GStreamer pipeline in cv2.VideoCapture() instead, as described in:
How to Filesink and Appsink simultaneously in OpenCV Gstreamer. (Gstreamer pipeline included) - #7 by DaneLLL
See if that works in your case.

Is there a way to fix the issue while still using the V4L2 backend?

We don’t see the issue when running a GStreamer pipeline in cv2.VideoCapture(), so we would suggest trying that method.

My AGX has OpenCV built without GStreamer support, and it seems that I cannot open a VideoCapture with a GStreamer pipeline.

We would suggest using the OpenCV package installed through SDKManager; it enables GStreamer.
