Maximum Performance of SSD_mobilenet_v2 model for NVIDIA T4 in TensorRT using trtexec?


Can you share the maximum performance reported for the SSD_mobilenet_v2 model in TensorRT using trtexec?


TensorRT Version:
GPU Type: T4
Nvidia Driver Version: 440+
CUDA Version: 10.2
CUDNN Version:
Operating System + Version: Ubuntu 18.04

Could someone please provide any inputs/help regarding this?

Hi @george_na,
Apologies for delayed response.

Can you elaborate on what you mean by maximum performance?
There are too many SKUs/models/configurations for us to maintain this kind of information.


Hi @AakankshaS,
I am using the standard ssd_mobilenet_v2_coco_2018_03_29 model and want to measure its maximum performance (throughput and latency) in TensorRT using trtexec.
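For reference, one common way to benchmark this model with trtexec is to first convert the TensorFlow frozen graph to UFF and then build/time an engine. This is only a sketch: the file names are placeholders, and the tensor names (`Input`, `NMS`), the 3x300x300 input shape, and the `config.py` preprocessing script follow TensorRT's sampleUffSSD convention, which may differ from your setup.

```shell
# Convert the TensorFlow frozen graph to UFF. Requires the uff and
# graphsurgeon packages shipped with TensorRT; config.py (from sampleUffSSD)
# replaces the TF NMS subgraph with TensorRT's NMS plugin node.
python convert_to_uff.py frozen_inference_graph.pb \
    -O NMS \
    -p config.py \
    -o sample_ssd.uff

# Build an FP16 engine and measure throughput/latency with trtexec.
trtexec --uff=sample_ssd.uff \
        --uffInput=Input,3,300,300 \
        --output=NMS \
        --fp16 \
        --saveEngine=ssd_mobilenet_v2.engine
```

At the end of the run, trtexec prints the measured latency statistics and throughput for the built engine.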

Hi @AakankshaS , @AastaLLL
How can I use ssd_mobilenet_v2 with trtexec?
Can you share the maximum performance of the model used in this post with trtexec, and in INT8 as well?

Hi @george_na,
Are you just asking “Here is network X on SKU Y, how fast does it run?” Because we don’t keep that information except for a very few key networks, of which this isn’t one.


Hi @AakankshaS,
I am asking about the maximum throughput we can get for the same model in TensorRT.

Hi @george_na,
To get the throughput of your model, you can run trtexec in verbose mode. Let us know if you face any issues.
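As a sketch of that suggestion (the UFF file and tensor names are placeholders carried over from the sampleUffSSD convention, not values confirmed in this thread):

```shell
# Build and time an INT8 engine; --verbose prints layer-by-layer details
# of the build and timing. Note: without a calibration cache, trtexec uses
# dummy dynamic ranges, so the timing is representative but the accuracy
# of the resulting engine is not.
trtexec --uff=sample_ssd.uff \
        --uffInput=Input,3,300,300 \
        --output=NMS \
        --int8 \
        --verbose
```

The throughput (queries per second) and latency percentiles are reported in the summary that trtexec prints after the timing runs complete.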