Tiny YOLO-v3 + Jetson Nano

Hi,

On this page (Jetson Zoo - eLinux.org), it's possible to find various DNN models for inference on Jetson with TensorRT support, including links to the code. In particular, though, the trt-yolo-app link (https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/blob/master/yolo/README.md#trt-yolo-app) is broken.
I have a model trained with Tiny YOLO and I'd like to run it on the Jetson Nano. When I run it the standard way, the FPS is far below NVIDIA's benchmark results.
So I'd like to know if anyone has step-by-step instructions for running a Tiny YOLO model on the Jetson Nano at 25 FPS (Jetson Benchmarks | NVIDIA Developer)?

Hi Adriano, thanks - I have updated the link on the Jetson Zoo page. YOLO is now natively supported by DeepStream 4.0 - see this app note: NVIDIA Metropolis Documentation

You can find the Tiny-YOLO v3 benchmarking instructions here: [url]https://devtalk.nvidia.com/default/topic/1050377/jetson-nano/deep-learning-inference-benchmarking-instructions/[/url]

There is also a yolov3_onnx sample included with TensorRT at /usr/src/tensorrt/samples/python
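Since that sample is already in Python, here is a rough sketch of the usual pattern for reloading a serialized engine and timing inference with the TensorRT Python API and PyCUDA (TensorRT 5/6-era API). The engine filename ("yolov3_tiny.trt"), the batch-1 assumption, and the 100-run timing loop are my own illustration, not part of the sample, and the outputs still need the sample's YOLO post-processing:

[code]
import time
import numpy as np
import pycuda.autoinit  # creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Filename is an assumption; onnx_to_tensorrt.py in the sample builds an
# engine that can be serialized to disk and reloaded like this.
with open("yolov3_tiny.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate pagelocked host buffers and device buffers for every binding.
host_bufs, dev_bufs, bindings = [], [], []
for binding in engine:
    size = trt.volume(engine.get_binding_shape(binding))
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    host = cuda.pagelocked_empty(size, dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

stream = cuda.Stream()
# Dummy input; replace with a resized, normalized CHW image.
host_bufs[0][:] = np.random.random(host_bufs[0].shape).astype(host_bufs[0].dtype)

runs = 100
start = time.time()
for _ in range(runs):
    cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
    context.execute_async(bindings=bindings, stream_handle=stream.handle)
    for host, dev in zip(host_bufs[1:], dev_bufs[1:]):
        cuda.memcpy_dtoh_async(host, dev, stream)
    stream.synchronize()
print("~%.1f FPS" % (runs / (time.time() - start)))
# host_bufs[1:] now hold the raw YOLO output tensors to post-process.
[/code]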

By the way, the previous GitHub URL moved to: [url]https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/restructure/yolo[/url]

Hi, thanks for the fast answer.
I will go through all the documentation you shared. By the way, these examples and docs are in C, right? Do you have any documentation for Python developers?

Best regards.