Hello Experts,
It would be great to have some benchmarking information on good models for face recognition/comparison on the Jetson Nano platform.
If DeepStream is used for live video analytics, which settings in GStreamer and the other pipelines need to be configured?
Hi,
We don’t have a face-specific benchmark for Nano, but you can find some general scores here:
https://developer.nvidia.com/embedded/jetson-nano-dl-inference-benchmarks
For DeepStream, please find the detailed information in our document:
Thanks.
Hi @AastaLLL
Thanks for the benchmark details. However, the procedure given seems to use TensorRT, as follows:
Hi all, below you will find the procedures to run the Jetson Nano deep learning inferencing benchmarks from this blog post with TensorRT.
note: for updated JetPack 4.4 benchmarks, please use github.com/NVIDIA-AI-IOT/jetson_benchmarks
While using one of the recommended power supplies, make sure your Nano is in 10W performance mode (which is the default mode):
$ sudo nvpmodel -m 0
$ sudo jetson_clocks
Using other lower-capacity power supplies may lead to system instabilities or shutdowns.
Is it possible to run the same benchmark using OpenCV? I would like to understand what kind of interface OpenCV is given for loading CNN models.
Hi,
The benchmark only measures the TensorRT inference time.
The input is random data rather than a decoded image.
To link OpenCV for inference, you can simply fill the input buffer from the OpenCV image (e.g., with cudaMemcpy).
The type should be float32, and the format is RGB or BGR depending on the model itself.
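A minimal sketch of that preprocessing step, assuming a model that takes a 3x224x224 float32 CHW input normalized to [0, 1] (the exact size, channel order, and normalization depend on your model). A random NumPy array stands in for the decoded OpenCV frame here; in a real pipeline it would come from cv2.imread or cv2.VideoCapture, which return HWC uint8 BGR arrays:

```python
import numpy as np

# Stand-in for an OpenCV frame: cv2.imread()/VideoCapture produce
# HWC, uint8, BGR arrays with exactly this layout.
frame = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)

def to_trt_input(img_bgr, want_rgb=True):
    """Convert an OpenCV-style BGR frame into a float32 CHW host buffer.

    Whether the model expects RGB or BGR (and its exact normalization)
    depends on how it was trained -- check the model's preprocessing spec.
    """
    img = img_bgr[:, :, ::-1] if want_rgb else img_bgr  # BGR -> RGB
    img = img.astype(np.float32) / 255.0                # uint8 -> float32 in [0, 1]
    chw = np.transpose(img, (2, 0, 1))                  # HWC -> CHW
    return np.ascontiguousarray(chw)                    # contiguous, ready for cudaMemcpy

buf = to_trt_input(frame)
print(buf.shape, buf.dtype)  # (3, 224, 224) float32
```

The resulting host buffer can then be copied into the TensorRT input binding on the device (host-to-device cudaMemcpy), which is the "fill the buffer from OpenCV" step described above.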
Thanks.