When running inference with the TensorRT engine in C++, I got the following error:
…/rtSafe/cuda/cudaConvolutionRunner.cpp (483) - Cudnn Error in executeConv: 3 (CUDNN_STATUS_BAD_PARAM)
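For context, this is roughly the inference path where the error is raised. It is only a sketch: engine deserialization and buffer allocation are omitted, and `devInput`/`devOutput` stand in for my real device buffers.

```cpp
#include <NvInfer.h>
#include <cuda_runtime_api.h>

// Minimal sketch of the failing inference call; `engine` is the
// deserialized ICudaEngine and the device buffers are placeholders.
void infer(nvinfer1::ICudaEngine* engine, void* devInput, void* devOutput)
{
    nvinfer1::IExecutionContext* context = engine->createExecutionContext();

    // One device pointer per engine binding, in binding order.
    void* bindings[] = {devInput, devOutput};

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // The CUDNN_STATUS_BAD_PARAM error above is reported from inside this call.
    context->enqueueV2(bindings, stream, nullptr);
    cudaStreamSynchronize(stream);

    cudaStreamDestroy(stream);
    context->destroy();
}
```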
Moreover, the error only happens in FP32 mode; INT8 mode works fine.
I have also verified that the input buffer is FP32, so the data type should be correct.
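For reference, this is how I check what the engine itself expects; `printBindingTypes` is just an illustrative helper around the binding API:

```cpp
#include <NvInfer.h>
#include <iostream>

// Dump the name, direction, and data type of every binding so the
// expected input type can be compared against the buffers I feed in.
void printBindingTypes(const nvinfer1::ICudaEngine& engine)
{
    for (int i = 0; i < engine.getNbBindings(); ++i)
    {
        // DataType: 0 = kFLOAT (FP32), 1 = kHALF, 2 = kINT8, 3 = kINT32
        std::cout << engine.getBindingName(i)
                  << (engine.bindingIsInput(i) ? " (input): " : " (output): ")
                  << static_cast<int>(engine.getBindingDataType(i))
                  << std::endl;
    }
}
```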
I converted the model from a TensorFlow .pb file to ONNX, and then converted the ONNX model to a TensorRT engine.
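The ONNX-to-TensorRT step looks roughly like the sketch below; the file name, the logger, and the 1 GiB workspace size are placeholders, and no precision flags are set for the FP32 build:

```cpp
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <cstdio>
#include <cstdint>

// Minimal logger; the severity filter is a placeholder choice.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::printf("%s\n", msg);
    }
} gLogger;

int main()
{
    auto builder = nvinfer1::createInferBuilder(gLogger);

    // ONNX models require an explicit-batch network definition.
    auto network = builder->createNetworkV2(
        1U << static_cast<uint32_t>(
            nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH));

    auto parser = nvonnxparser::createParser(*network, gLogger);
    parser->parseFromFile("model.onnx",  // placeholder path
        static_cast<int>(nvinfer1::ILogger::Severity::kWARNING));

    auto config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1ULL << 30);  // 1 GiB, placeholder

    // Plain FP32 build: no kFP16/kINT8 builder flags are set.
    auto engine = builder->buildEngineWithConfig(*network, *config);

    // ... serialize `engine` to disk and clean up ...
    return engine != nullptr ? 0 : 1;
}
```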
Can you suggest any solutions? Thanks.
TensorRT Version: TensorRT-18.104.22.168
GPU Type: GeForce GTX 1050 Ti
Nvidia Driver Version: 440.82
CUDA Version: 10.2
CUDNN Version: 8
Operating System + Version: Ubuntu 18.04