Speed of YOLOv3-lite on Jetson TX2

Hi, I’ve designed a YOLOv3 model based on the original YOLOv3-lite in Caffe (thanks to eric for the great work: [url]https://github.com/eric612/MobileNet-YOLO.git[/url]). When I test the model on a Jetson TX2 with the yolo_detect tool (based on ssd_detect) on a webcam, the inference time per image is about 40 ms, but the final frame rate is only about 6–7 FPS. I don’t know exactly how to speed up the other parts of the detection pipeline (image preprocessing and postprocessing). Thanks so much!
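At 6–7 FPS each frame takes roughly 150 ms, so about 110 ms per frame is being spent outside inference. A minimal sketch of per-stage timing to find where that time goes, assuming an OpenCV capture loop; the 416x416 input size and the Caffe forward call are placeholders for your model’s:

[code]
#include <chrono>
#include <iostream>
#include <opencv2/opencv.hpp>

using Clock = std::chrono::steady_clock;

static double ms_since(Clock::time_point t0) {
    return std::chrono::duration<double, std::milli>(Clock::now() - t0).count();
}

int main() {
    cv::VideoCapture cap(0);          // webcam
    cv::Mat frame, input;
    while (cap.read(frame)) {
        auto t0 = Clock::now();
        // --- preprocess: resize/normalize to network input (placeholder 416x416) ---
        cv::resize(frame, input, cv::Size(416, 416));
        double t_pre = ms_since(t0);

        auto t1 = Clock::now();
        // --- inference: Caffe forward pass goes here (placeholder) ---
        // net->Forward();
        double t_inf = ms_since(t1);

        auto t2 = Clock::now();
        // --- postprocess: decode boxes, NMS, draw (placeholder) ---
        double t_post = ms_since(t2);

        std::cout << "pre " << t_pre << " ms, infer " << t_inf
                  << " ms, post " << t_post << " ms\n";
    }
    return 0;
}
[/code]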

Hi,

Have you compiled your model into a TensorRT PLAN?
[url]https://developer.nvidia.com/tensorrt[/url]
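If not yet, a PLAN can be built offline from the Caffe deploy files. A minimal sketch against the TensorRT 5.x Caffe-parser C++ API as shipped in JetPack for TX2; the file names and the output blob name ("detection_out") are placeholders, and any custom YOLO layers would additionally need IPlugin implementations:

[code]
#include <cstdio>
#include <fstream>
#include "NvInfer.h"
#include "NvCaffeParser.h"

using namespace nvinfer1;
using namespace nvcaffeparser1;

class Logger : public ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING) printf("%s\n", msg);
    }
} gLogger;

int main() {
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();
    ICaffeParser* parser = createCaffeParser();

    // Parse the deploy prototxt and trained weights (file names are placeholders).
    const IBlobNameToTensor* blobs = parser->parse(
        "yolov3-lite.prototxt", "yolov3-lite.caffemodel",
        *network, DataType::kFLOAT);

    // Mark the network output ("detection_out" is a placeholder blob name).
    network->markOutput(*blobs->find("detection_out"));

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 28);   // 256 MB workspace
    builder->setFp16Mode(true);              // TX2 supports fast FP16

    ICudaEngine* engine = builder->buildCudaEngine(*network);

    // Serialize the engine to a PLAN file for later deserialization.
    IHostMemory* plan = engine->serialize();
    std::ofstream out("yolov3-lite.plan", std::ios::binary);
    out.write(static_cast<const char*>(plan->data()), plan->size());

    plan->destroy(); engine->destroy(); parser->destroy();
    network->destroy(); builder->destroy();
    return 0;
}
[/code]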

If yes, it’s recommended to run your model with the DeepStream SDK:
[url]https://developer.nvidia.com/deepstream-sdk[/url]

We have optimized the whole pipeline from camera to display.
It should give you much better performance.
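If DeepStream is not an option, the serialized PLAN can also be run directly with the TensorRT runtime, which already removes the Caffe forward-pass overhead. A hedged sketch of deserializing and executing the engine; binding names and buffer sizes are placeholders for your model:

[code]
#include <cstdio>
#include <fstream>
#include <iterator>
#include <vector>
#include <cuda_runtime_api.h>
#include "NvInfer.h"

using namespace nvinfer1;

class Logger : public ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING) printf("%s\n", msg);
    }
} gLogger;

int main() {
    // Load the serialized PLAN produced in the build step.
    std::ifstream in("yolov3-lite.plan", std::ios::binary);
    std::vector<char> plan((std::istreambuf_iterator<char>(in)),
                           std::istreambuf_iterator<char>());

    IRuntime* runtime = createInferRuntime(gLogger);
    ICudaEngine* engine =
        runtime->deserializeCudaEngine(plan.data(), plan.size(), nullptr);
    IExecutionContext* context = engine->createExecutionContext();

    // Device buffers indexed by binding; names and sizes are placeholders.
    void* buffers[2];
    const int inIdx  = engine->getBindingIndex("data");
    const int outIdx = engine->getBindingIndex("detection_out");
    cudaMalloc(&buffers[inIdx],  3 * 416 * 416 * sizeof(float));
    cudaMalloc(&buffers[outIdx], 7 * 100 * sizeof(float));  // placeholder size

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Copy the preprocessed image in, run inference, copy detections out.
    // cudaMemcpyAsync(buffers[inIdx], hostInput, ..., cudaMemcpyHostToDevice, stream);
    context->enqueue(1, buffers, stream, nullptr);
    // cudaMemcpyAsync(hostOutput, buffers[outIdx], ..., cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);

    cudaStreamDestroy(stream);
    cudaFree(buffers[inIdx]); cudaFree(buffers[outIdx]);
    context->destroy(); engine->destroy(); runtime->destroy();
    return 0;
}
[/code]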

Thanks.