On this page (https://elinux.org/Jetson_Zoo) you can find various DNN models for inference on Jetson with TensorRT support, including links to the code. However, the link to the trt-yolo-app (https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/blob/master/yolo/README.md#trt-yolo-app) is broken.
I have a model trained with Tiny YOLO and I'd like to run it on the Jetson Nano. When I run it the standard way, the FPS is far below the NVIDIA benchmark results.
So I'd like to know, if possible, whether someone has step-by-step instructions to run my Tiny YOLO model on the Jetson Nano at 25 FPS, as in the benchmarks (https://developer.nvidia.com/embedded/jetson-nano-dl-inference-benchmarks)?
Hi Adriano, thanks - I have updated the link on the Jetson Zoo page. YOLO is now natively supported by DeepStream 4.0 - see this app note: https://docs.nvidia.com/metropolis/deepstream/Custom_YOLO_Model_in_the_DeepStream_YOLO_App.pdf
You can find the Tiny-YOLO v3 benchmarking instructions here: https://devtalk.nvidia.com/default/topic/1050377/jetson-nano/deep-learning-inference-benchmarking-instructions/
There is also a yolov3_onnx sample included with TensorRT at /usr/src/tensorrt/samples/python
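For reference, running that sample typically looks something like the sketch below. The script names and the `requirements.txt` file are assumed from the stock JetPack install of the TensorRT samples; verify them on your own device before running:

```shell
# Copy the sample to a writable location (the system copy lives in a read-only area).
cp -r /usr/src/tensorrt/samples/python/yolov3_onnx ~/yolov3_onnx
cd ~/yolov3_onnx

# Install the sample's Python dependencies.
python3 -m pip install -r requirements.txt

# Download the YOLOv3 weights/config and convert the Darknet model to ONNX.
python3 yolov3_to_onnx.py

# Build a TensorRT engine from the ONNX model and run inference on a sample image.
python3 onnx_to_tensorrt.py
```

These commands are device-specific (Jetson with JetPack/TensorRT installed), so treat them as a starting point rather than an exact recipe.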
By the way, the previous GitHub URL moved to: https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/restructure/yolo
Hi, thanks for the fast answer.
I will go through all the documentation you shared with me. By the way, these examples and docs are in C, right? Do you have any documentation for Python developers?