GPU acceleration for OpenCV?

Hi,

Does the Jetson Nano support GPU acceleration for OpenCV using CUDA?

According to Nvidia documentation the answer is yes, which is the best kind of technically correct.

The OpenCV installed by default on the Nano does not have CUDA built in. You can check with print(cv2.getBuildInformation()) in Python, or cv::getBuildInformation() in C++. Almost all the fun build options are off.
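If you'd rather check this programmatically than eyeball the dump, the build-information string can be parsed. A minimal sketch (the helper name and the exact section labels are assumptions based on typical OpenCV 4.x dumps):

```python
def cuda_enabled(build_info: str) -> bool:
    """Return True if an OpenCV build-information dump reports CUDA support.

    Looks for a section line such as:
        NVIDIA CUDA:                   YES (ver 10.2, CUFFT CUBLAS)
    """
    for line in build_info.splitlines():
        key, _, value = line.partition(":")
        if key.strip() in ("NVIDIA CUDA", "Use CUDA", "CUDA"):
            return value.strip().upper().startswith("YES")
    return False

# With OpenCV installed, you would call it like this:
#   import cv2
#   print(cuda_enabled(cv2.getBuildInformation()))

# Illustrative lines in the format the dump uses:
sample_no = "  NVIDIA CUDA:                   NO"
sample_yes = "  NVIDIA CUDA:                  YES (ver 10.2, CUFFT CUBLAS)"
print(cuda_enabled(sample_no), cuda_enabled(sample_yes))  # False True
```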

What OpenCV does have is GStreamer support built in, and Nvidia has been pushing that like it's the be-all and end-all because it's pretty much the only way to do accelerated video decode on the Nano. OpenCL is not supported on the Nano because… well, it's Nvidia, and they don't want to support a competing tech.
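To illustrate the GStreamer path: cv2.VideoCapture can be handed a pipeline string that uses the Nano's hardware elements (nvarguscamerasrc for the CSI camera, nvvidconv for conversion). Treat the element names and caps below as a sketch to adapt, not a guaranteed recipe:

```python
def csi_camera_pipeline(width=1280, height=720, fps=30, flip=0):
    """Build a GStreamer pipeline string for cv2.VideoCapture that keeps
    capture and conversion on the Nano's hardware blocks as long as possible,
    handing OpenCV a plain BGR frame at the end via appsink."""
    return (
        f"nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"format=NV12, framerate={fps}/1 ! "
        f"nvvidconv flip-method={flip} ! "
        f"video/x-raw, format=BGRx ! "
        f"videoconvert ! video/x-raw, format=BGR ! appsink"
    )

# Typical usage (requires an OpenCV build whose build information
# reports "GStreamer: YES"):
#   import cv2
#   cap = cv2.VideoCapture(csi_camera_pipeline(), cv2.CAP_GSTREAMER)
```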

You can compile OpenCV with CUDA yourself on the Nano, but I don't think everything works. When I ran tests, some things were failing, including sanity checks on CUDA plugins, which is probably why it wasn't built with CUDA in the first place. YMMV; feel free to use one of the multiple scripts for the Nano that build OpenCV from source. Some things may work. Some things may not.


I don’t wanna do deep learning on it, I just want OpenCV and Eigen to do machine vision like visual SLAM or RGBD obstacle avoidance, but I don’t want to spend a thousand euros on a TX2…
I hope OpenCV acceleration will be fully supported in the future.

You may want to have a look at this:

The JetBot project also has some simple obstacle avoidance code you can look at. It’s not exactly what you may be looking for, but it might suffice.

There is also this, but it requires a RealSense camera and uses librealsense instead of OpenCv:

That’s a good hardware solution that people have tested for the problem. From what I understand the bulk of the work is done on the camera itself.

I will attend the webinar on Isaac on the 30th of May ^^
I already know the T265; it’s a really great product in terms of optimization. But it’s not really suited to large-scale applications or development due to its limited internal memory (the size of the map depends on the complexity and extent of the scene, and tops out at less than a house). One possible solution for using the T265 outdoors, for example, is to perform local map stitching on another computer, but I don’t think the current implementation can handle that.

I would like to build a lightweight ground platform with my D435 and a Jetson Nano. I hope OpenCV + GPU will soon be fully functional on the Nano…

Apparently librealsense supports your camera, even though it’s RGBD. It won’t figure out its own position, I guess, but it’s a start. They have example code for almost every language. You can start from there and see how far it gets you. Road following might not be too hard even with a monocular camera. You can see some example code on how to accomplish this here:

“OpenCV For Lane Detection in Self Driving Cars” by Galen Ballew https://link.medium.com/d4nXZ44bXW

He uses numpy and OpenCV, but the basic idea presented is portable to nearly anything.
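The core of that article's approach boils down to a short pipeline. A minimal sketch, with illustrative (not tuned) parameter values; the helper name and the fixed lower-half region of interest are my assumptions, not the article's exact code:

```python
import numpy as np

def detect_lane_lines(gray):
    """Classic lane-detection pipeline: edge detection, a region-of-interest
    mask, then a probabilistic Hough transform to extract line segments."""
    import cv2  # imported here so the module loads even without OpenCV

    edges = cv2.Canny(gray, 50, 150)

    # Keep only the lower half of the frame, where the road usually is.
    mask = np.zeros_like(edges)
    mask[edges.shape[0] // 2:, :] = 255
    roi = cv2.bitwise_and(edges, mask)

    # Returns an array of segments, each as (x1, y1, x2, y2), or None.
    return cv2.HoughLinesP(roi, rho=1, theta=np.pi / 180, threshold=20,
                           minLineLength=20, maxLineGap=5)
```

From there you would average or extrapolate the segments into left/right lane boundaries, as the article describes.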

It also depends on what you mean by GPU acceleration for OpenCV.

AFAIK, there is no automatic translation. With C++ code, you would use the functions available in the cv::cuda namespace with cv::cuda::GpuMat instead of cv::Mat on the CPU. Be aware that only a subset of OpenCV features is available on CUDA.

Furthermore, AFAIK, most CUDA support is aimed at discrete GPUs, and memory concerns with the iGPU on Jetson may call for further optimizations that are not automatic.

For Python OpenCV, cv2.cuda bindings do exist in 4.x builds compiled with CUDA, but I cannot say much about them.
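For what it's worth, a CUDA-enabled build exposes the same upload/compute/download pattern through Python. The helper below is a hypothetical sketch (cv2.cuda.resize is one of the covered functions), with a CPU fallback so it also runs on non-CUDA builds:

```python
import numpy as np

def gpu_resize_if_available(frame, dsize):
    """Resize on the GPU when OpenCV was built with CUDA, else on the CPU.

    Mirrors the C++ pattern of swapping cv::Mat for cv::cuda::GpuMat;
    note the explicit host<->device copies, which are exactly the kind of
    overhead that can eat the speedup for small images.
    """
    import cv2  # imported here so the module loads even without OpenCV

    if cv2.cuda.getCudaEnabledDeviceCount() > 0:
        gpu = cv2.cuda_GpuMat()
        gpu.upload(frame)                  # host -> device copy
        gpu = cv2.cuda.resize(gpu, dsize)  # runs on the GPU
        return gpu.download()              # device -> host copy
    return cv2.resize(frame, dsize)        # plain CPU path
```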


The Nvidia Jetson Nano comes with Python 2.7 and Python 3.6.9, with OpenCV 4.1.1.
I checked print(cv2.getBuildInformation()) (note the trailing parentheses; without them you only print the function object), and it doesn’t show GPU support. Should I uninstall the built-in OpenCV and build fresh, or is it possible to enable CUDA with the built-in OpenCV?

Yes, you do have to rebuild it for GPU support. On 4.4 GA, with this script, you can just run build_opencv.sh. I don’t know why a version with GPU support isn’t included with JetPack, but I appreciate the stars.

Re: optimizations, it’s not going to be as fast as Nvidia’s dedicated solutions. As @Honey_Patouceul mentions, most code will have to be rewritten to use the GPU, and mixing GPU work with CPU work may actually hurt performance.


So what would be the best alternative for doing CUDA? Or are you saying that instead of using the CUDA modules in OpenCV, it would be better to write your own implementations of those functions in a .cu file?