Jetson Nano: faster object recognition with the GPU

I have a Jetson Nano 2GB that I use for object recognition. I use the code from this page: Simple object tracking with OpenCV - PyImageSearch

The code is available here: AIComputerVision/ at master · mailrocketsystems/AIComputerVision · GitHub

I run it with CUDA.

Everything starts and runs fine. My problem is that the FPS is very low even though it is using the GPU: it runs at only 2–3 FPS, and sometimes it takes a long time to load. (I use a very small resize so it loads faster; if I make the resize larger it is extremely slow.)

If I use jetson-inference instead, the video is displayed at a large size and it runs somewhat faster. What is the difference?

How can I improve my code?

Another question: can the jetson-inference code be optimized further? For example, can I stop it from displaying the video while it runs, or make the video it receives smaller or a fixed size?


GPU performance depends on how the framework is implemented.
Since Jetson uses integrated memory shared between the CPU and GPU, not all frameworks have an optimal solution for it.

We recommend using jetson-inference instead.
It uses TensorRT as its backend inference engine, which is better suited to Jetson platforms.
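Regarding the earlier question about hiding the video window and controlling the input size: the jetson-inference demo programs accept video-option flags on the command line. This is a sketch, not a verified invocation — the exact flag names can vary between releases, so confirm them with `detectnet --help` on your install:

```shell
# Run the detectnet demo without opening a display window (--headless)
# and request a fixed capture size from the camera.
# /dev/video0 is an example V4L2 source; use your own input here.
detectnet --headless --input-width=640 --input-height=480 /dev/video0
```

Skipping the on-screen render with `--headless` also saves a small amount of GPU time per frame, which can help on the 2GB Nano.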


I’m going to try jetson-inference, but how do I use TensorRT together with OpenCV?

The thing is, my code is already working, and if I switch to jetson-inference I have to add a lot of lines.

You need to check the following:

  • OpenCV is built with CUDA support
  • OpenCV is 4.2 or later.
    The CUDA backend for the DNN module is supported from 4.2 onward.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.