Description
I am trying to run TensorRT inference on a YOLOv4 model. I successfully converted the model to ONNX and built the TensorRT engine, but the model's output shape is fully dynamic: [None, None, None]. I am getting different output shapes from TensorRT and TensorFlow. TensorFlow outputs [1, None, 84] (I wrote the second element as None because it is the only one that changes with different inputs). However, when I run TensorRT inference I always get [10647] as the output shape, which can never be reshaped into [1, None, 84]. I suspect this is because the output shape is dynamic and needs to be set somehow. How can I set it (either while exporting the ONNX model or while building the TensorRT engine)?
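As a side note that may help narrow this down: 10647 happens to be exactly the number of candidate boxes YOLOv4 produces for a 416×416 input (3 anchors per cell over the 52×52, 26×26, and 13×13 grids), so the [10647] I see might just be the box dimension of the output with the other dimensions dropped. A quick sanity check of that arithmetic (the 416 input size and strides 8/16/32 are the standard YOLOv4 defaults, which I assume my model uses):

```python
# YOLOv4 predicts 3 anchor boxes per cell on three feature-map scales.
# For a 416x416 input the strides are 8, 16, and 32, giving grids of
# 52x52, 26x26, and 13x13.
input_size = 416
strides = (8, 16, 32)
anchors_per_cell = 3
num_boxes = sum(anchors_per_cell * (input_size // s) ** 2 for s in strides)
print(num_boxes)  # 10647
```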
Environment
TensorRT Version: 8.2
GPU Type: RTX 3070
Nvidia Driver Version:
CUDA Version: 11.4
CUDNN Version:
Operating System + Version: Pop!_OS 20.04
Python Version (if applicable): 3.6
TensorFlow Version (if applicable): 2.5.0
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
Please include:
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered