Description
I have been trying to run TensorRT on Windows with C++ to accelerate inference. My environment is Windows 10 Home with an RTX 2080 Ti. I have tried many combinations of CUDA, cuDNN, and TRT (e.g. TRT 5.1.5 + CUDA 10.0 + cuDNN 7.5, or TRT 7.0 + CUDA 10.0 + cuDNN 7.6).
With TRT 5.1.5, running the bundled sampleMNIST sample throws an error: Cudnn Error in nvinfer1::rt::cuda::CudnnConvolutionLayer::execute: 8 (CUDNN_STATUS_EXECUTION_FAILED).
With TRT 7.0.0.11, the program crashes at the builder->createBuilderConfig() call.
Does anyone know how to solve this?
Environment
TensorRT Version: 5.1.5
GPU Type: RTX 2080 Ti
Nvidia Driver Version: 441.22
CUDA Version: 10.0
CUDNN Version: 7.5
Operating System + Version: Windows 10 Home
Python Version (if applicable): None
TensorFlow Version (if applicable): None
PyTorch Version (if applicable): None
Baremetal or Container (if container which image + tag): None
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
Please include:
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered