Incorrect inference results in TensorRT compared to TensorFlow inference

Description

I am using an SSD model for object detection. When I run inference with TensorRT, I get an incorrect output shape compared to the TensorFlow output. Can you please help here?

Environment

TensorRT Version: 8.0.1.6
GPU Type:
Nvidia Driver Version:
CUDA Version: 10.2.300
CUDNN Version: 8.2.1.32
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable): 1.15.5+nv21.12
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

  • I ran inference with the SSD model I am using; calling model.predict returns an output of shape (1, 11692, 18) - please refer to the git repo.
  • I converted the same model to TensorRT with the command
    trtexec --onnx=ssd7keras_od.onnx --saveEngine=ssd7keras_od.trt --explicitBatch
    and ran inference on the resulting engine, which returned an output of shape (1, 1000) - please refer to the git repo; a sketch for inspecting the engine bindings follows this list.
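For reference, here is a minimal sketch (assuming the engine file name from the trtexec command above and the TensorRT 8.0 Python API) that deserializes the engine and prints every I/O binding with its shape. SSD exports often declare more than one output, so if the engine lists several output bindings, reading only the first one in the inference code can look like a wrong output shape:

    import tensorrt as trt

    # Deserialize the engine produced by the trtexec command above
    # (file name assumed from that command).
    logger = trt.Logger(trt.Logger.WARNING)
    with open("ssd7keras_od.trt", "rb") as f, trt.Runtime(logger) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())

    # Print every binding: direction, name, and shape as the engine sees it.
    for i in range(engine.num_bindings):
        kind = "input" if engine.binding_is_input(i) else "output"
        print(kind, engine.get_binding_name(i), engine.get_binding_shape(i))

If the engine itself reports a single (1, 1000) output, the mismatch was most likely introduced before TensorRT, during the Keras-to-ONNX export.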

Hi,
We recommend you check the samples linked below in case of TF-TRT integration issues.

If the issue persists, we recommend you reach out to the TensorFlow forum.
Thanks!

I used the exact steps in the TensorRT Quick Start Guide. Please find the reference below.

Hi,

Could you please try the following:

If you still face this issue, please share the ONNX model with us.
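In the meantime, one way to narrow this down is to check the output shapes declared in the exported ONNX graph itself. Below is a minimal sketch (assuming the file name from the trtexec command in your report and the onnx Python package); if the graph already declares a (1, 1000) output instead of (1, 11692, 18), the problem lies in the Keras-to-ONNX export rather than in TensorRT:

    import onnx

    # Load the exported model (file name assumed from the trtexec command)
    # and print the shape declared for each graph output.
    model = onnx.load("ssd7keras_od.onnx")
    for out in model.graph.output:
        dims = [d.dim_value if d.dim_value > 0 else d.dim_param
                for d in out.type.tensor_type.shape.dim]
        print(out.name, dims)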

Thank you.