There is a YOLOv3 sample at /usr/src/tensorrt/samples/python/yolov3_onnx that converts the model:
yolov3 -> ONNX -> TensorRT
I ran inference on a pre-recorded video using onnx_to_tensorrt.py, but the inference speed was very low.
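For reference, this is the workflow I followed, assuming the sample's default scripts (the script names come from the sample directory; check the local README, as they may differ between TensorRT versions):

```shell
# Run on the Jetson Nano, inside the TensorRT YOLOv3 sample directory.
cd /usr/src/tensorrt/samples/python/yolov3_onnx

# 1. Convert the Darknet weights/config to an ONNX model
python3 yolov3_to_onnx.py

# 2. Build a TensorRT engine from the ONNX model and run inference
python3 onnx_to_tensorrt.py
```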
Locating Object Coordinates using DetectNet on Jetson Nano
https://github.com/dusty-nv/jetson-inference/blob/python/docs/detectnet-console-2.md
There are Pretrained Detection Models Available:
https://github.com/dusty-nv/jetson-inference/blob/python/docs/detectnet-console-2.md#user-content-pretrained-detection-models-available
The models there run at 17-19 FPS.
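For comparison, this is roughly how I ran the pretrained detectnet models (invocation based on the linked detectnet-console docs; the network name is one of the listed pretrained models, and the image paths are placeholders for your own files):

```shell
# From the jetson-inference build directory on the Nano.
# Runs SSD-Mobilenet-v2 on a single image and writes the annotated result.
./detectnet-console.py input.jpg output.jpg --network=ssd-mobilenet-v2
```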
I am interested in detecting persons in images, and found a recently published paper:
A Comparison of Embedded Deep Learning Methods for Person Detection
https://arxiv.org/pdf/1812.03451.pdf
Conclusion
Experimental results show that Tiny YOLO-416 and SSD (VGG-300) are among the fastest models, and Faster R-CNN (Inception ResNet-v2) and R-FCN (ResNet-101) are the most accurate ones. However, neither of these models nails the trade-off between speed and accuracy. Further analysis indicates that YOLO v3-416 delivers relatively accurate results in a reasonable amount of time, which makes it a desirable model for person detection on embedded platforms.
Why isn't there a trained YOLOv3 model among the Pretrained Detection Models?
Could I get any guidance on how to implement YOLOv3 with faster inference than the current Pretrained Detection Models Available?
https://github.com/dusty-nv/jetson-inference/blob/python/docs/detectnet-console-2.md#user-content-pretrained-detection-models-available