How do I set the output shape of an ONNX model / TensorRT engine for TensorRT inference?

Description

I am trying to run TensorRT inference on a YOLOv4 model. I successfully converted the model to ONNX and was also able to build the TensorRT engine. However, the output shape of the YOLOv4 model is completely dynamic: [None, None, None]. I am getting different output shapes from TensorRT and TensorFlow. TensorFlow outputs [1, None, 84] (I have written the second element as None because it is the only element that changes across inputs). However, I always get [10647] as the output shape when I run TensorRT inference, and that can never be reshaped into [1, None, 84]. I suspect this happens because the output shape is dynamic and I need to set it somehow. How can I set it (either while building the ONNX model or the TensorRT engine)?
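For reference, a quick sanity check (plain Python, no TensorRT needed) shows why the flat buffer cannot be viewed as [1, N, 84]; it assumes 10647 is the standard YOLOv4 candidate-box count at 416x416, i.e. 3 anchors per cell over the 13x13, 26x26 and 52x52 grids:

```python
# 10647 = 3 * (13^2 + 26^2 + 52^2): the YOLOv4 candidate-box count at 416x416.
boxes = 3 * (13 * 13 + 26 * 26 + 52 * 52)
print(boxes)       # 10647

# A flat buffer of 10647 floats cannot be reshaped to [1, N, 84]:
# 10647 is not divisible by 84, so no integer N satisfies 1 * N * 84 == 10647.
print(10647 % 84)  # 63
```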

Environment

TensorRT Version: 8.2
GPU Type: RTX 3070
Nvidia Driver Version:
CUDA Version: 11.4
CUDNN Version:
Operating System + Version: Pop!_OS 20.04
Python Version (if applicable): 3.6
TensorFlow Version (if applicable): 2.5.0
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi,
Could you share the ONNX model and the script, if you have not already, so that we can assist you better?
In the meantime, you can try a few things:
https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html#onnx-export

  1. Validate your model with the snippet below.

check_model.py

import onnx

# Path to your ONNX model (placeholder -- substitute your own file).
filename = "yourONNXmodel.onnx"
model = onnx.load(filename)
# Raises an exception if the model is structurally invalid.
onnx.checker.check_model(model)
  2. Try running your model with the trtexec command.

https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
If you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!

Hello, please find the script files in the attachment. I am not able to upload the ONNX file as it is more than 100 MB, so I am sharing a Google Drive link for it instead.
yolov4-416.onnx

engine.py (1.5 KB)
engine_ops.py (893 Bytes)
inference.py (1.7 KB)
model_inference.py (654 Bytes)
detect.py (5.1 KB)
detect_trt.py (3.8 KB)

Hi,

Please refer to the following YOLO TensorRT inference sample, which may help you.
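As a general note (a NumPy-only sketch, independent of any particular sample): host output buffers copied back from a TensorRT engine arrive as flat 1-D arrays, so the usual pattern is to reshape them with the shape the execution context reports for that binding, e.g. context.get_binding_shape(index) in the TensorRT 8 Python API. The buffer contents and shape below are placeholders:

```python
import numpy as np

# Stand-in for a device-to-host copy of one output binding:
# TensorRT hands back a flat buffer, not a shaped tensor.
flat = np.zeros(1 * 10647 * 84, dtype=np.float32)

# The shape the execution context would report for this binding,
# e.g. context.get_binding_shape(i) in the TensorRT 8 Python API.
binding_shape = (1, 10647, 84)

out = flat.reshape(binding_shape)
print(out.shape)  # (1, 10647, 84)
```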

Thank you.