I recently ran OpenCV 4 face detection using the DNN model res10_300x300_ssd_iter_140000.caffemodel and found the performance terrible: about 1 frame every 5 seconds at best.
Can you please suggest a way to improve the frame rate, or does NVIDIA provide any tested face detection models like you do for object detection?
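For reference, the usual res10 SSD pipeline looks roughly like the sketch below. Only the post-processing helper is concrete; the commented lines show where the cv2.dnn calls would go, and the model file names there are the commonly used ones, not necessarily yours. The network output shape (1, 1, N, 7) with normalized box coordinates is the standard SSD layout this model produces.

```python
import numpy as np

def filter_detections(detections, frame_w, frame_h, conf_threshold=0.5):
    """Filter raw SSD output (shape (1, 1, N, 7)) down to pixel-space boxes.

    Each row of detections[0, 0] is
    [image_id, class_id, confidence, x1, y1, x2, y2] with coords in [0, 1].
    """
    boxes = []
    for row in detections[0, 0]:
        confidence = float(row[2])
        if confidence < conf_threshold:
            continue
        # Scale normalized corners up to pixel coordinates.
        x1, y1, x2, y2 = row[3:7] * np.array(
            [frame_w, frame_h, frame_w, frame_h], dtype=np.float32)
        boxes.append((int(x1), int(y1), int(x2), int(y2), confidence))
    return boxes

# The surrounding pipeline would roughly be (not run here, needs model files):
# net = cv2.dnn.readNetFromCaffe("deploy.prototxt",
#                                "res10_300x300_ssd_iter_140000.caffemodel")
# blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 1.0, (300, 300),
#                              (104.0, 177.0, 123.0))
# net.setInput(blob)
# boxes = filter_detections(net.forward(), frame.shape[1], frame.shape[0])
```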
Do you use OpenCV's VideoCapture (e.g. cap.read()) to read camera frames?
By default, OpenCV uses FFmpeg to read/write the camera.
Since FFmpeg runs entirely on the CPU on Jetson, the performance won't be good.
The Facenet provided by jetson-inference does not outperform OpenCV's dnn module. Is there no better way? Is it necessary to use DIGITS to train Facenet directly?
NVIDIA, please respond. The only reason for me to use the Jetson Nano is its image processing performance, but it looks like it doesn't even support OpenCV well. Please help, or recommend a better approach using OpenCV, since most existing code is based on OpenCV.
Thank you. I will try your source on my Jetson Nano.
I get this error: "Error timeout: can not create camera provider (in src/rpc/socket/client/socketClientDispatch.cpp)". Could you tell me the cause of this error?
For camera use cases, it's not recommended to use a third-party library such as OpenCV.
Please check our MMAPI or DeepStream (available in Q2) for better performance.
It's not as fast as the demo for the Jetson (probably because that one is written in C).
On a MacBook, the above webcam code runs the face detection at 12 fps (but does not use GStreamer), so I'm not surprised it's a bit choppy on the Tegra.
OpenCV is supported and works on Jetson.
But if you want a high-performance multimedia solution, we recommend MMAPI or DeepStream.
OpenCV is widely used and has lots of vision-based features.
So we include it in the default package to reduce users' overhead.
But we also have our own multimedia solutions, and we have done a lot to optimize them for the Jetson platform.
This is why we recommend using our solution rather than OpenCV.
I found that NVIDIA has optimized OpenCV, but when I boot the Nano from the SD card image, how do I use the optimized OpenCV?
So far I have built and installed OpenCV manually, and it really took a long time.
NVIDIA provides a build of OpenCV optimized specifically for the Tegra platform. It contains Tegra-specific optimizations that enable it to run faster than the stock OpenCV implementation.
Both can be installed side by side. The NVIDIA-provided OpenCV is installed under /usr (/usr/bin for executables, /usr/include for headers, /usr/lib for libraries, etc.). All build scripts I've seen posted here configure the prefix to /usr/local by default (so /usr/local/bin, /usr/local/include, /usr/local/lib, etc.).
Are you looking to use OpenCV with Python or C++ (or some other bindings)?
"OpenCV is the leading open source library for computer vision, image processing and machine learning, and now features GPU acceleration for real-time operation."
It is true, though with caveats. OpenCV has CUDA acceleration support for some things, but not everything is GPU-accelerated; most algorithms run purely on the CPU. Not all functions have GPU versions, and the conversion is not automatic. In other words, a program written for the CPU has to be rewritten before it can use the GPU.
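Before assuming GPU acceleration, it's worth checking whether your OpenCV build was even compiled with CUDA. A small sketch that parses the output of cv2.getBuildInformation(); the "NVIDIA CUDA:" line format varies across OpenCV versions, so treat this as a heuristic, not an official API:

```python
import re

def cuda_enabled(build_info: str) -> bool:
    """Heuristically check an OpenCV build-info dump for CUDA support.

    When OpenCV is compiled with CUDA, cv2.getBuildInformation() typically
    prints a line like '  NVIDIA CUDA: YES (ver 10.2, ...)'; otherwise 'NO'.
    """
    match = re.search(r"NVIDIA CUDA:\s*(\w+)", build_info)
    return bool(match) and match.group(1).upper() == "YES"

# Usage:
# import cv2
# print(cuda_enabled(cv2.getBuildInformation()))
```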
A MacBook CPU is much more powerful than the Nano's for sure, but the Nano's CPU is not supposed to be powerful. It only needs to be powerful enough to keep up with the GPU, and that it does well. You offload what you can onto the GPU to make things go fast.