Trying out an OpenCV sample app, confused about performance


First - apologies if this isn’t the right forum. If this is better posted elsewhere, I’m happy to move it.
I’m currently trying to get my first OpenCV application up and running on the TX2 with a decent framerate.

Here is my current build info for OpenCV.
I used the JetsonHacks OpenCV repo to install the latest OpenCV (primarily for GStreamer and OpenGL support, but I'm happy to revert to stock if that is the solution).

I am trying to build this OpenCV example application:
slightly modified to always use /dev/video1 (USB camera) regardless of command-line parameters.
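For reference, the modification is just hard-coding the capture device. A minimal sketch, assuming the sample opens its camera through `cv::VideoCapture` (the device index and window name here are illustrative):

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Hard-code the USB camera instead of reading the device from argv.
    // Index 1 maps to /dev/video1 with OpenCV's V4L2 backend.
    cv::VideoCapture cap(1);
    if (!cap.isOpened()) {
        std::cerr << "Failed to open /dev/video1" << std::endl;
        return 1;
    }
    cv::Mat frame;
    while (cap.read(frame)) {
        cv::imshow("preview", frame);
        if (cv::waitKey(1) == 27) break;  // Esc to quit
    }
    return 0;
}
```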

Building with this command (note the backquoted `pkg-config` calls, which the forum markup may have mangled):

```
nvcc `pkg-config --cflags opencv` `pkg-config --libs opencv`
```
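In case the inline backticks swallowed part of the command, the full invocation I'd expect looks like this; the source and output file names are placeholders, not the sample's actual names:

```shell
# Hypothetical file names; substitute the actual sample source file.
nvcc -o detect detect.cpp `pkg-config --cflags opencv` `pkg-config --libs opencv`
```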

When I run the compiled program, the four ARM cores are 100% utilized and detection time is about 240 ms. Naively, I was hoping for a decent FPS, but that doesn't seem to be the case.

My end goal, way down the line, is to use the TX2 for a wearable project: a head-mounted display (VuFine in my case) and a body-mounted camera that let me analyze the world around me In Real Time™. The HMD will stream live 720p video from the mounted camera, and to start there will be a few things it can do. For instance, I think it'd be cool to look at a QR code and have the HMD tell me what it is, or do facial recognition and pop up facts/info about people. Something like that.

My first question is really: what am I doing wrong? Is there some step I'm missing to improve the detection time? Is there any way I can confirm anything is running on the GPU?
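One quick sanity check, assuming your OpenCV build was compiled with CUDA support, is to ask OpenCV how many CUDA devices it can see:

```cpp
#include <opencv2/core/cuda.hpp>
#include <iostream>

int main() {
    // Returns 0 if OpenCV was built without CUDA or no device is usable.
    int n = cv::cuda::getCudaEnabledDeviceCount();
    std::cout << "CUDA-enabled devices visible to OpenCV: " << n << std::endl;
    if (n > 0)
        cv::cuda::printCudaDeviceInfo(0);  // dump details for device 0
    return 0;
}
```

You can also run `tegrastats` in another terminal while your program runs to watch GPU utilization on the TX2.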

Thanks in advance for any help!

Digging into it further, it looks like OpenCV GPU acceleration is not free: I have to explicitly write the code that uploads the image to the GPU and does the computation there, according to this:

Update: it looks like opencv2/gpu/gpu.hpp is not available on my current Jetson TX2.
Update 2: it looks like gpu.hpp is from OpenCV 2.x, and I'm on 3.4, so we're getting closer.
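In OpenCV 3.x the monolithic `gpu` module was split into `cuda*` modules, so the 2.x header `opencv2/gpu/gpu.hpp` became headers like `opencv2/cudaobjdetect.hpp`. Assuming the sample is Haar-cascade face detection, a rough sketch of the 3.x GPU path looks like this (the cascade file path and camera index are placeholders):

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaobjdetect.hpp>  // 3.x replacement for opencv2/gpu/gpu.hpp
#include <vector>

int main() {
    cv::VideoCapture cap(1);  // /dev/video1
    // Placeholder path; use the cascade XML shipped with your OpenCV install.
    cv::Ptr<cv::cuda::CascadeClassifier> cascade =
        cv::cuda::CascadeClassifier::create("haarcascade_frontalface_default.xml");

    cv::Mat frame, gray;
    cv::cuda::GpuMat d_gray, d_found;
    std::vector<cv::Rect> faces;

    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        d_gray.upload(gray);                         // host -> device copy
        cascade->detectMultiScale(d_gray, d_found);  // runs on the GPU
        cascade->convert(d_found, faces);            // device results -> std::vector
        for (const cv::Rect& r : faces)
            cv::rectangle(frame, r, cv::Scalar(0, 255, 0), 2);
        cv::imshow("faces", frame);
        if (cv::waitKey(1) == 27) break;  // Esc to quit
    }
    return 0;
}
```

Note the explicit `upload`/`convert` steps: nothing moves to or from the GPU unless you ask for it, which matches the observation above that the optimizations are not free.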