I am using the SSD_Mobilenet_v2_coco network in TensorRT.
What is the difference between these two situations?
1. Jetson Inference
2. Jetson Nano Benchmarks
Using a Jetson Nano:
In the Jetson Inference example, I got about 25 FPS.
In the Jetson Nano Benchmarks example, I got almost 39 FPS.
I compared the two examples,
and everything was the same (input size, precision, inference code, etc.).
But one thing is different:
the model file (.uff). The file sizes are different.
Why are these two model files different?
They are supposed to be the same network (ssd_mobilenet_v2_coco).
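One quick way to confirm that the two .uff files really contain different graphs (not just different padding or metadata) is to compare their sizes and checksums. This is only a sketch: the two file names below are stand-ins created on the spot, so substitute the actual paths of the .uff files from each project.

```shell
#!/bin/sh
# Sketch: compare two model files by byte size and checksum.
# model_a.uff / model_b.uff are dummy stand-ins for the real files
# from jetson-inference and the Jetson Nano benchmarks.
printf 'graph-A' > model_a.uff
printf 'graph-B-longer' > model_b.uff

wc -c model_a.uff model_b.uff     # byte sizes of each file
md5sum model_a.uff model_b.uff    # different hashes => genuinely different contents
```

If the checksums differ, the two projects are shipping different serialized graphs (for example, one may have been re-exported or pruned differently), which would explain the FPS gap even with identical inference settings.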
Please let me know if you have used any method to get better benchmark results.