CUDNN_STATUS_BAD_PARAM when running inference with dynamic shapes

Description

Please check this GitHub issue. No one has responded to me there, so I'm asking again here: CUDNN_STATUS_BAD_PARAM when infer with dynamic shape · Issue #1281 · NVIDIA/TensorRT · GitHub

Environment

TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (GitHub repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi @ysyyork,

We recommend that you share the complete error logs and a model/script that reproduces the issue so we can assist you better.

Thank you.

I shared everything in the GitHub link. I have reposted this three times…

Hello, are there any updates on this?

@ysyyork,

In the GitHub link you've shared, we are unable to find the ONNX model and the complete error logs.

It looks like you shared a TRT engine. The engine needs to be built on the machine where you want to run inference, because TensorRT optimizes the graph for the GPUs available at build time; the resulting engine is platform specific and not portable across different platforms.

So we recommend that you try the latest TensorRT version. If you still face this issue, please share the ONNX model and the trtexec command you used to generate the engine (the inference script is already available in the Git link), so we can try to reproduce the error on our end; a sketch of such a build command is shown below.
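
For reference, a minimal sketch of a trtexec build command for a dynamic-shape engine. The file names, the input name "input", and the shapes below are hypothetical placeholders; adjust them to your model:

    # Build an engine from ONNX with a dynamic batch dimension,
    # declaring an optimization profile via min/opt/max shapes.
    trtexec --onnx=model.onnx \
            --saveEngine=model.plan \
            --minShapes=input:1x3x224x224 \
            --optShapes=input:8x3x224x224 \
            --maxShapes=input:32x3x224x224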
Hope the following link may help you:
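
Additionally, one frequent source of shape-related failures with dynamic shapes is running inference without first setting the concrete input shape on the execution context. A minimal sketch using the TensorRT Python API; the engine path, binding indices, and shapes are hypothetical placeholders:

    import numpy as np
    import pycuda.autoinit  # noqa: F401 -- creates a CUDA context
    import pycuda.driver as cuda
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)

    # Deserialize a previously built engine (hypothetical path).
    with open("model.plan", "rb") as f, trt.Runtime(logger) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())

    context = engine.create_execution_context()

    # With dynamic shapes, the concrete input shape must be set on the
    # execution context before inference; skipping this step can lead to
    # shape-related errors at runtime.
    context.set_binding_shape(0, (4, 3, 224, 224))
    assert context.all_binding_shapes_specified

    # Size host/device buffers from the now-resolved binding shapes.
    inp = np.random.rand(4, 3, 224, 224).astype(np.float32)
    out = np.empty(tuple(context.get_binding_shape(1)), dtype=np.float32)
    d_inp = cuda.mem_alloc(inp.nbytes)
    d_out = cuda.mem_alloc(out.nbytes)

    cuda.memcpy_htod(d_inp, inp)
    context.execute_v2([int(d_inp), int(d_out)])
    cuda.memcpy_dtoh(out, d_out)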

Thank you.