I’m using a Jetson Nano to run object detection code that combines parts of the original YOLO Darknet code with a TensorRT version of the YOLO model. This ran with no problems on JetPack 4.2.1 plus OpenCV 4.1.2 built from source. I recently tried upgrading to JetPack 4.3 and the JetPack 4.4 Developer Preview and discovered that, when running object detection on a video, all 4 GB of memory on the Nano would be consumed within 2-3 minutes, causing the system to grind to a halt. Experimentation showed that this happened even if I didn’t load the model or run inference, and I’ve narrowed the memory leak down to OpenCV. I suspect it is related to memory management in the Mat class.
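For anyone wanting to confirm the same behaviour on their own board, a simple way to watch the leak is to log available memory while the detection app runs in another terminal. This is just a sketch; the 5-second interval and 3-minute duration are arbitrary choices, not anything from my setup:

```shell
# Log MemAvailable every 5 seconds while the detection app runs in
# another terminal. On the affected JetPack/OpenCV combinations the
# value drops steadily until the system starts to thrash.
for i in $(seq 1 36); do
    printf '%s %s\n' "$(date +%H:%M:%S)" "$(grep MemAvailable /proc/meminfo)"
    sleep 5
done
```

(`tegrastats` gives similar information on the Nano, but `/proc/meminfo` works anywhere and is easy to redirect to a log file for comparison across JetPack versions.)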
The memory leak occurs in the following cases:
- using a JP 4.3 image (which includes OpenCV 4.1.1)
- building L4T 32.4.2 from source and then installing JP 4.4 (including OpenCV 4.1.1) using apt-get
- uninstalling OpenCV from the above JP 4.4 installation and building OpenCV 4.1.1 from source (with or without CUDA support)
The memory leak does not occur in the following cases:
- JP 4.2.1 plus OpenCV 4.1.2 built from source
- JP 4.4 plus OpenCV 4.1.2 built from source
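For anyone who wants to try the non-leaking configurations, the OpenCV 4.1.2 source build follows the usual CMake flow. The flags below are an illustrative sketch rather than my exact command line, and `CUDA_ARCH_BIN=5.3` is an assumption targeting the Nano’s Maxwell GPU:

```shell
# Sketch of a CMake configuration for an OpenCV 4.1.2 source build
# on the Nano (run from an opencv-4.1.2/build directory; flags are
# illustrative, not my exact command line).
cmake -D CMAKE_BUILD_TYPE=Release \
      -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D WITH_CUDA=ON \
      -D CUDA_ARCH_BIN=5.3 \
      ..
make -j4 && sudo make install
```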
I realise that a memory leak in OpenCV is not the responsibility of the NVIDIA team, but I wanted to post this here to help anyone else who is hitting the same problem, and in the hope that NVIDIA will consider moving to a different OpenCV version in JetPack.
Unfortunately I haven’t been able to come up with a minimal code example that reproduces the problem. I currently have a cut-down version of my full application that exhibits the memory leak, and a test application that I believe has all the relevant features of the full application and yet doesn’t leak. Apologies, as I realise this post would be a lot more useful if I could provide code to reproduce the problem.