I haven't tried it for a while; they may have changed the libraries in a newer version… I would recommend using @AastaLLL's MobileNet-SSD, which is faster and better.
@AastaLLL, Jetson Nano does not support INT8. I think it's highly unlikely that you can get 20 FPS when running TensorRT-optimized yolov3-416 on Jetson Nano. For comparison, NVIDIA's previous announcement said that tiny-yolov3 (416x416) ran at 25 FPS on Jetson Nano, and inference-speed-wise, yolov3-416 can be 6~7 times slower than tiny-yolov3-416 (reference: YOLO: Real-Time Object Detection).
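A quick back-of-the-envelope check of those numbers (the 6~7x slowdown factor is taken from the published YOLO benchmark timings, not measured on the Nano itself):

```python
# Rough FPS estimate for yolov3-416 on Jetson Nano, extrapolated from
# NVIDIA's reported 25 FPS for tiny-yolov3-416 and the ~6-7x relative
# cost of full yolov3 at the same input size (per the YOLO site's timings).
tiny_yolov3_fps = 25.0                      # NVIDIA's announced number
slowdown_low, slowdown_high = 6.0, 7.0      # relative cost of full yolov3

est_high = tiny_yolov3_fps / slowdown_low   # optimistic estimate
est_low = tiny_yolov3_fps / slowdown_high   # pessimistic estimate

print(f"estimated yolov3-416 FPS: {est_low:.1f} ~ {est_high:.1f}")
# → roughly 3.6 ~ 4.2 FPS, nowhere near the claimed 20 FPS
```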
@afiqlcmec My implementation follows NVIDIA's original sample code, "Object Detection With The ONNX TensorRT Backend In Python". It serializes the optimized TensorRT engine into a file. I guess (I haven't tested it myself) that file cannot be used by DeepStream directly.
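The serialize-once, reuse-later pattern looks roughly like this. This is only a sketch of the file round-trip: in the actual sample the bytes come from TensorRT's `engine.serialize()` and are later fed to `runtime.deserialize_cuda_engine()`; those TensorRT-specific calls appear here only as comments so the snippet itself stays runnable without a GPU:

```python
from pathlib import Path

ENGINE_PATH = Path("yolov3.trt")  # hypothetical output filename

def save_engine(engine_bytes: bytes, path: Path = ENGINE_PATH) -> None:
    """Write a serialized TensorRT engine to disk.

    In the real sample, engine_bytes = engine.serialize() after the
    builder has optimized the ONNX model (a slow, one-time step).
    """
    path.write_bytes(engine_bytes)

def load_engine_bytes(path: Path = ENGINE_PATH) -> bytes:
    """Read the serialized engine back.

    In the real sample these bytes are passed to
    trt.Runtime(logger).deserialize_cuda_engine(...) on later runs,
    skipping the expensive optimization step.
    """
    return path.read_bytes()
```

DeepStream manages its own engine files and configuration, which is why an engine file produced this way may not plug into it directly.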
I am running yolov3 on my Jetson Nano to do object (person) detection. The total detection time is around 7 seconds, and I would like to accelerate it.
I tried to follow the steps on the git page, but the step-by-step manual is not for beginners ;-)
E.g. I cannot find the file CMakeLists.txt, and I also have no idea in which step it should be created.
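To see where those 7 seconds actually go, it may help to time each stage of the pipeline separately; a generic sketch (preprocess/infer/postprocess are placeholder names for whatever the repo's detection script really calls):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label: str, results: dict):
    # Record the wall-clock time of the enclosed block under `label`.
    start = time.perf_counter()
    try:
        yield
    finally:
        results[label] = time.perf_counter() - start

def profile_detection(preprocess, infer, postprocess, image):
    """Run one detection and return per-stage timings in seconds."""
    results = {}
    with timed("preprocess", results):
        data = preprocess(image)
    with timed("inference", results):
        raw = infer(data)
    with timed("postprocess", results):
        detections = postprocess(raw)
    return detections, results
```

If most of the 7 seconds turn out to be one-time model loading or engine building rather than per-frame inference, caching the serialized TensorRT engine is the first thing to try.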