I have a Jetson Nano 2GB that I use to recognize objects. I use the code from this page:
Simple object tracking with OpenCV - PyImageSearch
which can also be seen here:
AIComputerVision/person_tracking.py at master · mailrocketsystems/AIComputerVision · GitHub
I run it with CUDA.
Everything starts and runs fine. My problem is that the FPS is very low even though it uses the GPU. It runs at 2 to 3 FPS … and sometimes it stays loading for a long time (with a very small resize it loads faster; if I set it large it is extremely slow).
If I use jetson-inference, the video looks big and it runs a little faster. What's the difference?
How can I improve my code?
Another question: can I optimize the jetson-inference code further? For example, so that it does not display the video while running, or so that the video I receive is smaller or of an exact size.
The GPU performance depends on the implementation. Since Jetson has integrated memory, not all frameworks have an optimal solution for it.
It's recommended to use jetson-inference instead.
It uses TensorRT as the backend inference engine, which is more suitable for Jetson platforms.
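For reference, a minimal sketch of a jetson-inference detection loop, modeled on the detectnet example in the dusty-nv/jetson-inference repo (the model name and the stream URIs below are placeholders to adapt to your setup):

import jetson.inference
import jetson.utils

# load a TensorRT-accelerated detection model ("ssd-mobilenet-v2" is one of the bundled networks)
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

# input and output streams; the URIs are placeholders for your camera and display
camera = jetson.utils.videoSource("/dev/video0")
display = jetson.utils.videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()        # frame lands in shared CPU/GPU memory
    detections = net.Detect(img)  # inference runs through TensorRT on the GPU
    display.Render(img)
    display.SetStatus("detectNet | {:.0f} FPS".format(net.GetNetworkFPS()))

Regarding the question above about hiding the video and controlling its size: per the jetson-inference documentation, videoOutput accepts a --headless option to skip rendering entirely, and videoSource accepts --input-width and --input-height to request a specific capture size.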
I'm going to try jetson-inference, but how do I implement TensorRT with OpenCV?
The thing is, my code already works, and if I switch to jetson-inference I have to add a lot of lines.
You need to check the following:
- OpenCV is built with CUDA support.
- OpenCV is 4.2 or later; the DNN module has supported a CUDA backend since 4.2.
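As a quick sanity check, something like this verifies both points and routes the DNN module onto the GPU (the model file names are placeholders for whatever detector your script loads):

import cv2

print(cv2.__version__)                       # should report 4.2 or later
print(cv2.cuda.getCudaEnabledDeviceCount())  # > 0 only if OpenCV was built with CUDA

# load the detector; file names are placeholders for your model
net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "model.caffemodel")

# run inference through the CUDA backend instead of the default CPU path
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)  # DNN_TARGET_CUDA_FP16 is often faster on Jetson

Note that the OpenCV package shipped with JetPack is typically built without CUDA, so you may need to build OpenCV from source with CUDA and cuDNN enabled for the check above to pass.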