Using GPU with OpenCV on Jetson Xavier

Hi everyone,

I’m working on a facial recognition algorithm using the OpenCV and TensorFlow libraries. It runs well on my computer, and now I’m trying to run it on my Jetson AGX Xavier. I successfully installed OpenCV using this tutorial and TensorFlow using this one.

My code runs properly on the AGX Xavier, but the performance is the same as on my dev computer (which has no dedicated GPU and less CPU power).

I’m quite surprised, because I assumed that running OpenCV on the AGX Xavier would automatically use the power of the GPU for video reading and analysis.

Currently, I can’t reach more than 10 fps while reading a 4K video (without any processing). The jtop tool shows me that GPU usage during video reading is near 0% (CPU is around 75%).

I’m pretty sure there is something I’m missing. Is there a way to force OpenCV to use the GPU for image processing? Or is it something about the AGX Xavier initialization?

Thanks for sharing your knowledge!

First you need to build OpenCV with CUDA support; the OpenCV version delivered with JetPack has NO CUDA support. Look around here in the forums, or use this script by community member @mdegans.
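You can verify which build you actually have from Python; this is just a quick sanity check, not tied to any particular install script:

```python
import cv2

# A CUDA-enabled build lists "NVIDIA CUDA: YES" in its build configuration.
print(cv2.getBuildInformation())

# With CUDA compiled in, this should report at least one device on the
# Xavier; a CPU-only build simply reports 0.
print(cv2.cuda.getCudaEnabledDeviceCount())
```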

Second, your OpenCV code will NOT automatically run on the GPU after building OpenCV with CUDA support; you will have to modify it to do so. To get an idea of what must be done (assuming you are using Python): https://github.com/opencv/opencv/blob/master/modules/python/test/test_cuda.py
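As a rough illustration of the pattern that test file uses, here is a minimal sketch of the explicit upload/process/download flow (assuming a CUDA-enabled build; `video.mp4` is a placeholder path):

```python
import cv2

cap = cv2.VideoCapture("video.mp4")  # placeholder file name

# Allocate the GPU buffer once and reuse it to avoid per-frame allocations.
gpu_frame = cv2.cuda_GpuMat()

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Nothing runs on the GPU unless you move the data there explicitly...
    gpu_frame.upload(frame)

    # ...call the cv2.cuda variants of the operations you need...
    gpu_gray = cv2.cuda.cvtColor(gpu_frame, cv2.COLOR_BGR2GRAY)
    gpu_small = cv2.cuda.resize(gpu_gray, (1280, 720))

    # ...and download the result back to host memory when you need it.
    result = gpu_small.download()

cap.release()
```

Note that `cap.read()` itself still decodes the video on the CPU here, which is consistent with the near-0% GPU usage you observed during plain video reading.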

OK, I checked my OpenCV compilation and it seems that it was correctly compiled with CUDA:

[screenshot of the OpenCV build configuration showing CUDA enabled]

Can you confirm?

Second, your OpenCV code will NOT automatically run on the GPU after building OpenCV with CUDA support; you will have to modify it to do so. To get an idea of what must be done (assuming you are using Python): https://github.com/opencv/opencv/blob/master/modules/python/test/test_cuda.py

Thanks, I didn’t know that. I will investigate this part.


Looks good to me.

Side note: not sure, but looking at the OpenCV build script you mentioned, it seems it sets the CUDA arch to 7.0.
For Xavier, I think it should be 7.2.
You may consider using the script that @dkreutz mentioned.
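If you want to double-check what an existing build was compiled for without rebuilding, the arch shows up in the build information; a minimal sketch that just filters the relevant lines:

```python
import cv2

# A build configured with CUDA_ARCH_BIN=7.2 prints a line like
# "NVIDIA GPU arch: 72" in its build information.
for line in cv2.getBuildInformation().splitlines():
    if "NVIDIA" in line or "CUDA" in line:
        print(line.strip())
```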


Indeed. I made the modification in the CMake options before compiling, so it should be fine. Nevertheless, I’m currently considering reinstalling OpenCV using the script mentioned by @dkreutz.