YOLOv3 in Pretrained Detection Models Available

We do have a YOLOv3 sample inside.


The model is converted as follows:

yolov3 -> ONNX -> TensorRT
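If it helps, the two conversion steps can be scripted like this. The script names follow TensorRT's `yolov3_onnx` Python sample; the sample directory path is an assumption and may differ on your install:

```python
import sys

# Assumed location of TensorRT's yolov3_onnx sample; adjust for your install.
ONNX_SAMPLE_DIR = "/usr/src/tensorrt/samples/python/yolov3_onnx"

def conversion_commands(sample_dir):
    """Return the two conversion steps: Darknet weights -> ONNX -> TensorRT engine."""
    return [
        [sys.executable, f"{sample_dir}/yolov3_to_onnx.py"],    # writes yolov3.onnx
        [sys.executable, f"{sample_dir}/onnx_to_tensorrt.py"],  # builds the TRT engine and runs inference
    ]

if __name__ == "__main__":
    for cmd in conversion_commands(ONNX_SAMPLE_DIR):
        print(" ".join(cmd))  # on the Jetson, execute with subprocess.run(cmd, check=True)
```

The first script downloads the Darknet config/weights and emits an ONNX graph; the second builds a TensorRT engine from it, so the engine build happens on the target device.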

I ran it on a pre-recorded video.

The inference speed was very low.

Locating Object Coordinates using DetectNet on Jetson Nano

We have Pretrained Detection Models Available

Here, the models run at 17–19 FPS.
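For reference, a minimal sketch of reading box coordinates from DetectNet via jetson-inference's Python bindings. The model name and camera URI are assumptions, and the detection fields (`Left`/`Top`/`Right`/`Bottom`, `ClassID`, `Confidence`) follow the jetson-inference Python API:

```python
try:
    import jetson.inference
    import jetson.utils
    HAVE_JETSON = True
except ImportError:
    HAVE_JETSON = False  # not running on a Jetson with jetson-inference installed

def box_info(left, top, right, bottom):
    """Return (center_x, center_y, width, height) for a detection box."""
    return ((left + right) / 2.0, (top + bottom) / 2.0, right - left, bottom - top)

if __name__ == "__main__" and HAVE_JETSON:
    # "ssd-mobilenet-v2" and "csi://0" are example choices; adjust for your setup.
    net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
    camera = jetson.utils.videoSource("csi://0")
    img = camera.Capture()
    for det in net.Detect(img):
        cx, cy, w, h = box_info(det.Left, det.Top, det.Right, det.Bottom)
        print(f"class {det.ClassID} conf {det.Confidence:.2f} "
              f"center=({cx:.0f},{cy:.0f}) size=({w:.0f}x{h:.0f})")
```

The coordinates come back in pixel space of the captured frame, so they can be used directly for cropping or tracking.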

I am interested in detecting people in images, and I found a recently published paper:

A Comparison of Embedded Deep Learning Methods for Person Detection


Experimental results show that
Tiny YOLO-416 and SSD (VGG-300) are among the fastest models, and
Faster R-CNN (Inception ResNet-v2) and R-FCN (ResNet-101) are the most accurate ones.
However, neither of these models nails the tradeoff between speed and accuracy.
Further analysis indicates that YOLOv3-416 delivers relatively accurate results in
a reasonable amount of time, which makes it a desirable model for person detection
on embedded platforms.

Why don’t we have a trained YOLOv3 model among the Pretrained Detection Models?

Could I get any guidance on how to implement YOLOv3 for faster inference than the current Pretrained Detection Models Available? https://github.com/dusty-nv/jetson-inference/blob/python/docs/detectnet-console-2.md#user-content-pretrained-detection-models-available


There are lots of object detection models across different frameworks,
and it’s hard for us to include every possibility.
jetson-inference focuses on the pretrained DetectNet models.

But you can get YOLOv3 directly from the author’s GitHub.

We also have a tutorial for YOLO implementation.
You can check this sample for more information:



I am facing a “page not found” error when following the second link. Any update on it?

What does it mean: “Yolo has been removed since Yolo is natively supported from Deepstream 4.0”?

Would you please help me with how to use YOLOv3 on Jetson Xavier?

I shared a code example for optimizing YOLOv3 with TensorRT and running inference on Jetson platforms. It has been tested on both the Jetson Nano and the Jetson AGX Xavier. Feel free to take a look.
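For anyone wiring up their own YOLOv3 post-processing on top of a TensorRT engine, here is a minimal IoU/NMS sketch in plain Python. The function names and threshold are my own illustrative choices, not taken from the shared example:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.45):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)          # highest remaining score
        keep.append(best)
        # drop boxes that overlap the kept box too much
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

# Example: two overlapping boxes plus one far away; NMS keeps indices 0 and 2.
print(nms([(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)], [0.9, 0.8, 0.7]))
```

After the engine produces raw candidate boxes and confidences, a step like this filters overlapping detections before drawing or counting them.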