TensorRT trtexec: converting the SSD MobileNet v2 TensorFlow model to an INT8 precision engine file

Description

Kindly give the steps to create a general INT8 engine for the SSD MobileNet v2 TensorFlow model and to benchmark it (preferably using the trtexec command).

Is it necessary to supply any additional calibration files during this process, compared to FP32? If so, could you mention them?
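
For reference, a minimal sketch of the kind of trtexec invocations I have in mind (assuming the frozen TensorFlow graph has already been converted to a UFF file; the file names, the Input/NMS tensor names, and the 3x300x300 input shape below are placeholders, and --calib is where I would expect a calibration cache to go if one is required):

# FP32 baseline engine (the same run also reports latency/throughput)
trtexec --uff=ssd_mobilenet_v2.uff --uffInput=Input,3,300,300 --output=NMS \
        --saveEngine=ssd_mv2_fp32.engine --workspace=2048

# INT8 engine, supplying a previously generated calibration cache
trtexec --uff=ssd_mobilenet_v2.uff --uffInput=Input,3,300,300 --output=NMS \
        --int8 --calib=ssd_mv2_calibration.cache \
        --saveEngine=ssd_mv2_int8.engine --workspace=2048

Since trtexec prints average latency and throughput at the end of each run, I assume the same commands can double as the benchmark.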

Environment

TensorRT Version: 7.0.11
GPU Type: T4
Nvidia Driver Version: 440
CUDA Version: 10.2
CUDNN Version:
Operating System + Version: Ubuntu 18.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Hi @GalibaSashi,
Please refer to the below post.

Thanks!

Hi @AakankshaS,

I have checked it, and it came to my attention that INT8 calibration is supported only for classification models. How can I use a detection model like ssd_mobilenet_v2, convert it using trtexec, and create the calibration table required for it, in minimal steps? Kindly do help out.
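
To make the question concrete, this is roughly how I imagine the calibration cache would be produced with the TensorRT Python API and then reused with trtexec --calib. It is only a sketch on my side: the class name, cache file name, batch preparation and preprocessing are placeholders, and it assumes numpy and pycuda are available.

# Hypothetical calibrator sketch: feeds preprocessed image batches to TensorRT
# and writes out the calibration cache that trtexec could later load via --calib.
import os
import numpy as np
import pycuda.autoinit  # initializes a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

class SSDEntropyCalibrator(trt.IInt8EntropyCalibrator2):
    def __init__(self, image_batches, cache_file="ssd_mv2_calibration.cache"):
        super().__init__()
        self.batch_iter = iter(image_batches)          # list of NCHW float32 arrays
        self.cache_file = cache_file
        self.batch_size = image_batches[0].shape[0]
        self.device_input = cuda.mem_alloc(image_batches[0].nbytes)

    def get_batch_size(self):
        return self.batch_size

    def get_batch(self, names):
        try:
            batch = next(self.batch_iter)
        except StopIteration:
            return None                                # calibration data exhausted
        cuda.memcpy_htod(self.device_input, np.ascontiguousarray(batch))
        return [int(self.device_input)]

    def read_calibration_cache(self):
        if os.path.exists(self.cache_file):
            with open(self.cache_file, "rb") as f:
                return f.read()
        return None

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)

My understanding is that such a calibrator is attached to the builder configuration (with the INT8 flag set) for one offline build, and the cache file it writes can afterwards be passed to trtexec with --calib so the table does not have to be regenerated. Please correct me if this is not the intended workflow for a detection model.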

Hi @AakankshaS @AastaLLL
Kindly help out.

Hi @AakankshaS,
I have benchmarked the SSD_mobilenet_v2 model and got strange results in trtexec. Can you help verify whether they are correct?
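
For reference, a timing run along the following lines is what I mean by benchmarking (the engine file name is a placeholder; --warmUp is in milliseconds and --duration in seconds, and --dumpProfile prints per-layer timings):

trtexec --loadEngine=ssd_mv2_int8.engine --batch=1 \
        --warmUp=500 --duration=60 --avgRuns=100 --dumpProfile

Please let me know if different flags would give more reliable numbers.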

Hi,
Apologies for the delayed response. Are you still facing the issue?