../rtSafe/cuda/cudaConvolutionRunner.cpp (483) - Cudnn Error in executeConv: 3 (CUDNN_STATUS_BAD_PARAM)

Description

When running inference with the TRT model in C++, I got the following error:
…/rtSafe/cuda/cudaConvolutionRunner.cpp (483) - Cudnn Error in executeConv: 3 (CUDNN_STATUS_BAD_PARAM)
FAILED_EXECUTION: std::exception

Moreover, the error only happens when I use FP32 mode; INT8 mode works fine.
I am sure the input is FP32, so the data type should be correct.
I converted the model from .pb to ONNX and then converted the ONNX model to a TRT engine.
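
For reference, a check along these lines confirms what the engine itself expects for the input binding (simplified sketch against the TensorRT 7 C++ API; the binding name "input" is a placeholder for the model's actual input tensor name):

```cpp
#include <NvInfer.h>
#include <cassert>
#include <iostream>

// Sketch: verify that the deserialized engine really expects an FP32 input.
// "engine" is assumed to be an already-deserialized nvinfer1::ICudaEngine*.
void checkInputBinding(nvinfer1::ICudaEngine* engine)
{
    const int inputIndex = engine->getBindingIndex("input");  // placeholder name
    assert(inputIndex >= 0);

    // Query the data type and dimensions the engine expects for this binding.
    const nvinfer1::DataType dtype = engine->getBindingDataType(inputIndex);
    const nvinfer1::Dims dims = engine->getBindingDimensions(inputIndex);

    std::cout << "input is FP32: " << (dtype == nvinfer1::DataType::kFLOAT) << "\n";
    std::cout << "input rank: " << dims.nbDims << std::endl;
}
```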

Can you suggest any solutions? Thanks.

Environment

TensorRT Version: TensorRT-7.2.2.3
GPU Type: 1050 Ti
Nvidia Driver Version: 440.82
CUDA Version: 10.2
CUDNN Version: 8
Operating System + Version: Ubuntu 18.04

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

What's more, the TensorRT version can't be changed because of project requirements, and the model can't be shared because it is confidential.

detailed_logs.zip (104.4 KB)
Here are the detailed logs.

Hi,

Based on the logs, it looks like you were able to generate the TRT engine successfully. Please make sure you're handling the data types and buffers correctly in your inference script (see the sketch below).
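
For example, here is a minimal sketch of the usual pattern, sizing each binding's device buffer from the data type and dimensions the engine reports instead of hard-coding FP32. The binding names "input"/"output" and the engine/context objects are placeholders; dynamic shapes would additionally need setBindingDimensions on the context.

```cpp
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <vector>

// Size in bytes of one element of a TensorRT data type.
static size_t elementSize(nvinfer1::DataType t)
{
    switch (t)
    {
    case nvinfer1::DataType::kFLOAT: return 4;
    case nvinfer1::DataType::kHALF:  return 2;
    case nvinfer1::DataType::kINT8:  return 1;
    case nvinfer1::DataType::kINT32: return 4;
    default:                         return 0;
    }
}

// Sketch: allocate every binding from what the engine reports, then run
// inference with enqueueV2 (explicit-batch engine, as produced from ONNX).
void infer(nvinfer1::ICudaEngine* engine,
           nvinfer1::IExecutionContext* context,
           const void* hostInput, void* hostOutput,
           size_t inputBytes, size_t outputBytes)
{
    std::vector<void*> bindings(engine->getNbBindings(), nullptr);

    for (int i = 0; i < engine->getNbBindings(); ++i)
    {
        const nvinfer1::Dims dims = engine->getBindingDimensions(i);
        size_t count = 1;
        for (int d = 0; d < dims.nbDims; ++d)
            count *= dims.d[d];
        // Allocate exactly what the engine expects for this binding.
        cudaMalloc(&bindings[i], count * elementSize(engine->getBindingDataType(i)));
    }

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    const int inputIndex  = engine->getBindingIndex("input");   // placeholder names
    const int outputIndex = engine->getBindingIndex("output");

    cudaMemcpyAsync(bindings[inputIndex], hostInput, inputBytes,
                    cudaMemcpyHostToDevice, stream);
    context->enqueueV2(bindings.data(), stream, nullptr);
    cudaMemcpyAsync(hostOutput, bindings[outputIndex], outputBytes,
                    cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);

    for (void* p : bindings)
        cudaFree(p);
    cudaStreamDestroy(stream);
}
```

A mismatch between the data type or shape the engine reports and what the script actually feeds it can surface as CUDNN_STATUS_BAD_PARAM at execution time, so it is worth printing the values from that loop for every binding.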

For your reference, please refer to the following samples.

Thank you.