I’ve been trying to get OpenCV up and running on my Jetson Nano developer kit with CUDA support. I used a Bash script I found on GitHub: mdegans/nano_build_opencv (Build OpenCV on Nvidia Jetson Nano).
I’m using JetPack 4.6.4 and OpenCV 4.1.1
The problem is, I keep running into the same error every time I run the script, despite following the same steps that seem to work perfectly in all the YouTube videos I’ve watched. Any help or advice on how to fix this would be greatly appreciated!
buildOpenCV.log (51.6 KB)
Do you need OpenGL support?
If not, you can turn it off and try it again.
Turning off OpenGL support fixed the build. However, the program itself hasn’t improved. I’m trying to do real-time image recognition on the Nano, and while the program runs smoothly on my desktop, it’s extremely slow on the Nano (around 1 fps). According to jetson-stats, OpenCV 4.5.4 is installed with CUDA 10.2 support, but the results don’t reflect that. I’m genuinely at a loss and starting to feel that the Nano might be too underpowered for this task. The recognition is done with YOLOv8 and a custom-trained dataset. I hope this fits into the thread, and I appreciate any assistance.
Hi @abrunner97, you could try running YOLO with TensorRT, it has more optimized performance:
The issue you may encounter is that YOLOv8 needs a newer Python/CUDA than what comes with JetPack 4. You can absolutely do real-time object detection on the Nano with TensorRT, though: for example SSD-Mobilenet at 30 FPS, earlier YOLO variants, etc.
For the OpenCV part, you will need to check whether you are actually using the build of OpenCV that has CUDA acceleration. You can do this in Python 3 with:
>>> import cv2
>>> print(cv2.getBuildInformation())
If this information does not match your expectation (check the CUDA section of the output), you will need to set your PYTHONPATH appropriately.
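A minimal sketch of the PYTHONPATH fix. The install path below is hypothetical: a locally built OpenCV on the Nano often lands under /usr/local, while the stock JetPack 4.1.1 package sits in the system dist-packages and can shadow it, so check your build log for the actual location:

```shell
# Hypothetical path -- replace with wherever your CUDA-enabled build
# installed its Python bindings (see the build/install log).
export PYTHONPATH=/usr/local/lib/python3.6/site-packages:$PYTHONPATH

# Verify which cv2 Python now picks up (version and file location):
python3 -c 'import cv2; print(cv2.__version__, cv2.__file__)' \
  || echo "cv2 not importable; check the path above"
```

Putting the export in ~/.bashrc makes it persistent across sessions.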
Also, even when OpenCV is compiled with CUDA support, that does not mean that the particular functions you are calling are accelerated. Only a subset of the API is CUDA-accelerated, and those functions live in the cv2.cuda module; the regular cv2 functions still run on the CPU. With that said, I would listen to dusty_nv.