What is the difference between these two models?

I am using the SSD_Mobilenet_v2_coco network with TensorRT.

What is the difference between these two situations?
1. Jetson Inference
2. Jetson Nano Benchmarks

On a Jetson Nano,
with the Jetson Inference example I got about 25 FPS, and
with the Jetson Nano Benchmarks example I got almost 39 FPS.

I compared the two examples, and everything was the same (input size, precision, inference code, etc.).

But one thing is different:
the model file (.uff). The file sizes are different.

Why are these two model files different?
They are the same network (ssd_mobilenet_v2_coco).

Please let me know if you have used any method to improve the benchmark result.

Moving to the Jetson forum so that the Jetson team can take a look.


Hi @sangjoon.hong, one model was pre-trained on the 37-class Oxford-IIIT Pets dataset, while the model from jetson-inference was trained on the 90-class MS COCO dataset.
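The class count directly affects the size of the SSD class-prediction head, which is one reason the two .uff files differ in size. As a rough sketch (the anchor count, channel count, and kernel size below are typical SSD-MobileNet-style values chosen for illustration, not read from the actual models):

```python
def class_head_params(num_classes, anchors_per_loc=6, in_channels=256, kernel=3):
    """Weight count of one SSD class-prediction conv layer (illustrative).

    The conv's output channels are one logit per anchor per class
    (plus 1 for the background class), so the parameter count grows
    linearly with the number of classes.
    """
    out_channels = anchors_per_loc * (num_classes + 1)
    return kernel * kernel * in_channels * out_channels

pets = class_head_params(37)  # 37-class Oxford-IIIT Pets head
coco = class_head_params(90)  # 90-class MS COCO head
print(pets, coco)  # the COCO head carries noticeably more weights
```

So even though the backbone is identical, the detection heads (and therefore the serialized model files) end up with different sizes.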


Thank you very much!! :)