[TX1][python][OpenCV3.1.0]: How to use the GPU to accelerate object detection in Python?

I have built OpenCV 3.1 with CUDA enabled, and everything works well. But I don't know whether my code is actually accelerated by the GPU. Some documents mention that OpenCV 3.1 is different from OpenCV 2.4 in that the library is already built with GPU acceleration.

So my questions are:
How can I know whether my code is running on the GPU?
Do I need special code to enable GPU/CUDA acceleration in Python with OpenCV 3.1?
How can I disable GPU acceleration if it is already enabled in the library? Do I need to build another OpenCV with CUDA turned off?
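
(By "CUDA turned off" I mean re-running the configure step with the CUDA option disabled, roughly as below; the rest of my original cmake options are omitted.)
$ cmake -D WITH_CUDA=OFF ..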

Thanks.

How can I know whether my code is running on the GPU?
You can try observing the GPU statistics with the command below.
$ sudo ./tegrastats
RAM 1204/3995MB (lfb 1x4MB) cpu [0%,0%,0%,0%]@1224 EMC 0%@1065 AVP 4%@80 NVDEC 268 MSENC 268 GR3D 0%@76 EDP limit 1734
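
You can also do a quick check from Python itself. This is only a sketch: cv2.getBuildInformation() is available in the 3.x bindings, but the cv2.cuda submodule may or may not be exposed by your particular build, and this only confirms that CUDA support was compiled in, not that a given call actually runs on the GPU.

import cv2

# The build information lists the compile-time options; look for the CUDA-related lines in the output.
print(cv2.getBuildInformation())

# If the cuda submodule is exposed by your bindings, this reports how many CUDA devices OpenCV can see.
if hasattr(cv2, "cuda"):
    print("CUDA devices:", cv2.cuda.getCudaEnabledDeviceCount())
else:
    print("cv2.cuda is not exposed in these Python bindings")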

Thanks Vicky and Merry Christmas.

I think tegrastats is still an indirect way to know whether the app is running on the GPU, because I can see readings of about 50%~70% even when the system is idle.

So far, the only method I have to make sure the Python code runs on the GPU is to call a Python wrapper around C++ GPU code (a rough sketch of what I mean is below). Any suggestions?
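
To show what I mean by a wrapper, here is a rough ctypes sketch. The library name libgpu_detect.so and the function gpu_detect() are just placeholders for whatever C++/CUDA detection code you compile yourself; the idea is that all of the GPU calls live in the compiled library and Python only hands over the image buffer.

import ctypes
import numpy as np

# Hypothetical shared library built from the C++/CUDA detection code.
lib = ctypes.CDLL("./libgpu_detect.so")
lib.gpu_detect.restype = ctypes.c_int
lib.gpu_detect.argtypes = [ctypes.POINTER(ctypes.c_ubyte), ctypes.c_int, ctypes.c_int]

def detect(image):
    # Hand the library a contiguous 8-bit single-channel buffer.
    gray = np.ascontiguousarray(image, dtype=np.uint8)
    h, w = gray.shape
    buf = gray.ctypes.data_as(ctypes.POINTER(ctypes.c_ubyte))
    return lib.gpu_detect(buf, w, h)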

Jerry,

You can use the NVIDIA Visual Profiler (https://developer.nvidia.com/nvidia-visual-profiler) to analyze the app.
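
For a Python script, one way to do this is to record a profile with nvprof and then open it in the Visual Profiler; if the timeline shows CUDA kernels while your detection code runs, the app is using the GPU. Something like the following, where detect.py stands for your own script:
$ nvprof -o detect_profile.nvvp python detect.py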