How to run TensorRT on Windows with C++


I have tried to run TensorRT on Windows with C++ to accelerate inference. My environment is Windows 10 Home with an RTX 2080 Ti. I have tried many combinations of CUDA, cuDNN, and TensorRT (e.g. TRT 5.1.5 + CUDA 10.0 + cuDNN 7.5, or TRT 7.0 + CUDA 10.0 + cuDNN 7.6).

With TRT 5.1.5, when I run the bundled sampleMNIST sample, it throws an error: Cudnn Error in nvinfer1::rt::cuda::CudnnConvolutionLayer::execute: 8 (CUDNN_STATUS_EXECUTION_FAILED).

With TRT 7.0.0.11, it crashes when it calls builder->createBuilderConfig().
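For reference, here is a minimal sketch of the call path where the crash happens, written against the TensorRT 7 C++ API (the logger implementation is illustrative, not the sample's actual one):

```cpp
#include <NvInfer.h>
#include <iostream>

// createInferBuilder requires an ILogger; this is a minimal stand-in.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
    if (!builder)
    {
        std::cerr << "createInferBuilder failed" << std::endl;
        return 1;
    }

    // This is the call that crashes in my setup.
    nvinfer1::IBuilderConfig* config = builder->createBuilderConfig();
    if (!config)
    {
        std::cerr << "createBuilderConfig returned null" << std::endl;
        builder->destroy();
        return 1;
    }

    config->destroy();
    builder->destroy();
    return 0;
}
```

Even this trimmed-down program crashes at the same point, which suggests the problem is in the environment rather than the sample code.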

Does anyone know how to solve this?


TensorRT Version: 5.1.5
GPU Type: RTX 2080 Ti
Nvidia Driver Version: 441.22
CUDA Version: 10.0
CUDNN Version: 7.5
Operating System + Version: Windows 10 Family
Python Version (if applicable): None
TensorFlow Version (if applicable): None
PyTorch Version (if applicable): None
Baremetal or Container (if container which image + tag): None



Can you try a fresh installation using TRT 7.0, CUDA 10.2, and cuDNN 7.6.5?
Also, please check that all the DLLs loaded from the system are 7.0 DLLs.
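One way to do that check is from a Visual Studio developer command prompt, where dumpbin is on the PATH (the executable name below assumes the default sampleMNIST build output):

```bat
:: List the DLLs the sample links against
dumpbin /DEPENDENTS sample_mnist.exe

:: Show which copy of each TensorRT/cuDNN DLL is found first on PATH
where nvinfer.dll
where cudnn64_7.dll
```

If `where` reports a DLL from an old TRT 5.x install directory ahead of the 7.0 one, the process will load the wrong version and can crash exactly as described.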


Thanks for your reply. I have now reinstalled everything as you suggested.

However, after I compile sampleMNIST with VS2019 and run the .exe from cmd, it throws an error exactly the same as the one in this link:

Please take a look, thanks a lot.

I tried TRT with CUDA 10.2 and cuDNN 7.6.5 on a local Windows 10 machine and could not reproduce the issue you mentioned.

Could you please check that the system dependencies are installed correctly? Also, are you getting this issue only with this sample, or with all the samples?

If the issue persists, could you please share the verbose error log?