Can you give the maximum performance of the SSD_mobilenet_v2 model reported for TensorRT using trtexec?
TensorRT Version: 220.127.116.11
GPU Type: T4
Nvidia Driver Version: 440+
CUDA Version: 10.2
Operating System + Version: Ubuntu 18.04
Could someone please provide any inputs/help regarding the same?
Apologies for delayed response.
Can you elaborate on what you mean by maximum performance?
There are too many SKUs/models/configurations for us to maintain this kind of information.
The general ssd_mobilenet_v2_coco_2018_03_29 model is what I am using for measuring the maximum performance (throughput and latency) in TensorRT using trtexec.
Hi @AakankshaS , @AastaLLL
How do I use ssd_mobilenet_v2?
Can you give the maximum performance of the model used in this post using trtexec, specifically in INT8?
Are you just asking “Here is network X on SKU Y, how fast does it run?” Because we don’t keep that information except for a very few key networks, of which this isn’t one.
Here I am asking for the maximum throughput we can get for this same model in TensorRT.
To get the throughput on your model, you can run trtexec in verbose mode. Let us know if you face any issues.
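As a starting point, a trtexec benchmarking run might look like the sketch below. This assumes the ssd_mobilenet_v2 model has already been converted to UFF (the common path for this model in TRT 7-era samples); the file name and the input/output tensor names are assumptions and must be adjusted to match your converted model. `--int8` enables INT8 kernels (without a calibration cache, trtexec uses dummy scales, so the numbers measure speed, not accuracy), and `--verbose` prints detailed build and per-layer logging.

```shell
# Hedged sketch: benchmark a UFF-converted ssd_mobilenet_v2 with trtexec.
# File name and tensor names (Input, NMS) are assumptions for this model.
trtexec --uff=ssd_mobilenet_v2.uff \
        --uffInput=Input,3,300,300 \
        --output=NMS \
        --int8 \
        --batch=8 \
        --workspace=1024 \
        --verbose
```

At the end of the run, trtexec reports throughput and latency statistics (mean, median, percentiles), which is where the "maximum performance" numbers for your own SKU and configuration come from.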