Hit "(292) - Cudnn Error in enqueue: 3" when trying to convert ONNX to TRT in INT8




TensorRT Version:
GPU Type: RTX 3070
Nvidia Driver Version: 470.63.01
CUDA Version: 11.3
CUDNN Version:
Operating System + Version: Ubuntu 20.04
Python Version (if applicable): 3.6
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.7
Baremetal or Container (if container which image + tag): Baremetal

Relevant Files

ONNX Model

Calib File

trtexec log

Steps To Reproduce

  1. Convert the trained model to ONNX format
  2. Generate Calib file
  3. Try to convert to a TensorRT engine in INT8, with the command below:
./trtexec --onnx=my_model.onnx  --optShapes=input0:1x1x1024x500 --int8 --calib=calibration.cache  --workspace=5000 --verbose

and get the core dumped as below:

[11/01/2021-22:35:33] [E] [TRT] (292) - Cudnn Error in enqueue: 3 (CUDNN_STATUS_BAD_PARAM)
terminate called after throwing an instance of 'nvinfer1::plugin::CudnnError'
  what():  std::exception
Aborted (core dumped)

For the detailed log, please refer to the attached log file.
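In case it helps triage, the calibration.cache from step 2 is plain text: a header line followed by one `tensor_name: <hex>` line per tensor, where the hex is the big-endian IEEE-754 encoding of that tensor's float scale. Here is a minimal sketch to sanity-check the cache before handing it to trtexec (pure Python; assumes the standard cache layout, adjust if yours differs):

```python
import struct

def read_calib_cache(path):
    """Parse a TensorRT calibration cache into (header, {tensor_name: scale}).

    Assumes the standard text layout: a 'TRT-...' header line, then
    'name: hex' lines where hex is the big-endian IEEE-754 bit pattern
    of the float scale for that tensor.
    """
    scales = {}
    with open(path) as f:
        header = f.readline().strip()  # e.g. 'TRT-7000-EntropyCalibration2'
        for line in f:
            line = line.strip()
            if not line or ":" not in line:
                continue
            name, hexval = line.rsplit(":", 1)
            scales[name.strip()] = struct.unpack("!f", bytes.fromhex(hexval.strip()))[0]
    return header, scales

# Usage (against the attached cache):
#   header, scales = read_calib_cache("calibration.cache")
#   suspicious = [n for n, s in scales.items() if s <= 0.0]
# A zero or missing scale is one thing worth ruling out, since an
# invalid scale can surface later as a cuDNN/enqueue-time error.
```

This is only a diagnostic aid, not part of the reproduction steps.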

Please help me on this.


Please refer to the links below for the custom plugin implementation and samples:

While the IPluginV2 and IPluginV2Ext interfaces are still supported for backward compatibility with TensorRT 5.1 and 6.0.x respectively, we recommend that you write new plugins or refactor existing ones to target the IPluginV2DynamicExt or IPluginV2IOExt interfaces instead.


Thank you for your quick response. I checked the link you provided, but I don’t think it is related to my issue, for the reasons below:

  1. The model only uses InstanceNorm, which is already supported;
  2. Converting the model to a TRT engine in FP16 succeeds; this error is only raised when converting to INT8.

I checked the code; it asserts at

        CUDNNASSERT(cudnnSetTensor4dDescriptor(mXDescriptor, CUDNN_TENSOR_NCHW, cudnn_dtype, 1, n * c, h, w));
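For context, that call describes the plugin's input to cuDNN as a 4-D tensor of shape (1, n*c, h, w): batch and channel are folded together so that instance normalization runs as one normalization over n*c independent "channels". A numpy sketch of that equivalence (illustrative only, not the plugin's actual code):

```python
import numpy as np

def instance_norm_reference(x, eps=1e-5):
    """Per-sample, per-channel normalization over the spatial dims of NCHW input."""
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def instance_norm_folded(x, eps=1e-5):
    """Same result computed on a (1, n*c, h, w) view, mirroring the
    cudnnSetTensor4dDescriptor(..., 1, n*c, h, w) trick in the plugin."""
    n, c, h, w = x.shape
    y = x.reshape(1, n * c, h, w)
    mean = y.mean(axis=(2, 3), keepdims=True)
    var = y.var(axis=(2, 3), keepdims=True)
    y = (y - mean) / np.sqrt(var + eps)
    return y.reshape(n, c, h, w)
```

CUDNN_STATUS_BAD_PARAM from cudnnSetTensor4dDescriptor generally means one of the arguments is invalid for cuDNN (e.g. a zero dimension or an unsupported data type for this layout), so one possibility worth checking is the `cudnn_dtype` the plugin receives on the INT8 path, since the FP16 path works.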

I don’t quite understand. Please give me a hand on this. Thank you.


@NVES @spolisetty, any update on this? I have also opened an issue on GitHub.
Please help me with this. I really do need to convert a model with InstanceNorm to INT8.
Thank you.


This error usually indicates an incompatible CUDA driver.

Please make sure all dependencies (driver, CUDA, cuDNN, TensorRT) are installed in compatible versions.