RTX 2080 Ti Supported?

I am using the TensorRT Python library to build a model:

with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network:
    ...
    network.add_input(...)
    etc.

This has worked fine for months on the 1080 Ti card. I’ve recently added an RTX 2080 Ti to my machine, and it does not work. While building the model, I see this error:

[TensorRT] ERROR: cuda/cudaConvolutionLayer.cpp (238) - Cudnn Error in execute: 8 (CUDNN_STATUS_EXECUTION_FAILED)
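
For anyone debugging an error like this, building with a verbose logger usually narrows the failure down to a specific layer and tactic. A minimal sketch, assuming the TensorRT 5.x Python API (the import guard only exists so the snippet also loads on a machine without TensorRT installed):

```python
# Sketch: enable verbose TensorRT logging so the builder prints per-layer
# tactic selection, which helps pinpoint the failing cuDNN call.
try:
    import tensorrt as trt
except ImportError:  # TensorRT not available on this machine
    trt = None

def make_verbose_logger():
    """Return a VERBOSE TensorRT logger, or None if TensorRT is unavailable."""
    if trt is None:
        return None
    return trt.Logger(trt.Logger.VERBOSE)
```

Passing such a logger as TRT_LOGGER to trt.Builder makes the build log far more detailed than the default WARNING level.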

My configuration (using nvidia-docker):

  • OS: ubuntu 16.04
  • cuda: 10.0
  • cudnn: 7.5.0
  • tensorrt: 5.1.5
user@dd17d4e31b32:~$ dpkg -l | grep TensorRT
ii  libnvinfer-dev                                              5.1.5-1+cuda10.0                                      amd64        TensorRT development libraries and headers
ii  libnvinfer-samples                                          5.1.5-1+cuda10.0                                      all          TensorRT samples and documentation
ii  libnvinfer5                                                 5.1.5-1+cuda10.0                                      amd64        TensorRT runtime libraries
ii  python-libnvinfer                                           5.1.5-1+cuda10.0                                      amd64        Python bindings for TensorRT
ii  python-libnvinfer-dev                                       5.1.5-1+cuda10.0                                      amd64        Python development package for TensorRT
ii  tensorrt                                                    5.1.5.0-1+cuda10.0                                    amd64        Meta package of TensorRT

user@dd17d4e31b32:~$ dpkg -l | grep cudnn
ii  libcudnn7                                                   7.5.0.56-1+cuda10.1                                   amd64        cuDNN runtime libraries
ii  libcudnn7-dev                                               7.5.0.56-1+cuda10.1                                   amd64        cuDNN development libraries and headers
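
Worth noting: in the output above, the TensorRT packages are built against +cuda10.0 while libcudnn7 is +cuda10.1. A small stdlib-only sketch (the helper names are mine, not part of any NVIDIA tooling) that flags that kind of mismatch in dpkg -l output:

```python
import re

def cuda_suffixes(dpkg_output):
    """Map each installed package to the CUDA version its build targets,
    parsed from the '+cudaX.Y' suffix in the dpkg version column."""
    suffixes = {}
    for line in dpkg_output.splitlines():
        m = re.match(r"ii\s+(\S+)\s+(\S+)", line)
        if not m:
            continue
        name, version = m.groups()
        cuda = re.search(r"\+cuda(\d+\.\d+)", version)
        if cuda:
            suffixes[name] = cuda.group(1)
    return suffixes

def mismatched(dpkg_output):
    """Return the package->CUDA map when more than one CUDA version appears
    across installed packages (a likely misconfiguration), else an empty dict."""
    suffixes = cuda_suffixes(dpkg_output)
    return suffixes if len(set(suffixes.values())) > 1 else {}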

Relevant part of Dockerfile:

# cuda/cudnn
FROM nvidia/cuda:10.0-cudnn7-devel-ubuntu16.04

# Install TensorRT
RUN dpkg -i /debs/nv-tensorrt-repo-ubuntu1604-cuda10.0-trt5.1.5.0-ga-20190427_1-1_amd64.deb
RUN apt-key add /var/nv-tensorrt-repo-cuda10.0-trt5.1.5.0-ga-20190427/7fa2af80.pub
RUN apt-get update && apt-get -y install \
    libcudnn7=7.5.0.56-1+cuda10.0 \
    libcudnn7-dev=7.5.0.56-1+cuda10.0 \
    tensorrt=5.1.5.0-1+cuda10.0 \
    python-libnvinfer-dev=5.1.5-1+cuda10.0 \
    python-libnvinfer=5.1.5-1+cuda10.0 \
    libnvinfer5=5.1.5-1+cuda10.0 \
    libnvinfer-dev=5.1.5-1+cuda10.0

I should mention that other CUDA-based libraries run fine on the 2080 Ti (PyTorch, for example).

Were you able to resolve this issue? I am running into exactly the same issue with the exact same setup, except for the Ubuntu version; I am using Ubuntu 18.

I’m afraid not. I am just using an older card until this is fixed or somebody can determine a workaround.

I had the same issue before; hope this helps!

I’m using Ubuntu 16.04 x64 with an RTX 2080 Ti graphics card, and I installed the following:

CUDA: CUDA Toolkit 10.1 (deb file)

For the cuDNN matching CUDA 10.1, I downloaded the following from “Download cuDNN v7.6.2 (July 22, 2019), for CUDA 10.1”:

a. cuDNN Runtime Library for Ubuntu16.04 (Deb)
b. cuDNN Developer Library for Ubuntu16.04 (Deb)
c. cuDNN Library for Linux

-> sudo dpkg -i the .deb packages from (a) and (b)

-> Unzip the cuDNN package from (c). After unzipping*, copy the files:

  $ sudo cp cuda/include/cudnn.h /usr/local/cuda-10.1/include
  $ sudo cp cuda/lib64/libcudnn* /usr/local/cuda-10.1/lib64
  $ sudo chmod a+r /usr/local/cuda-10.1/include/cudnn.h /usr/local/cuda-10.1/lib64/libcudnn*

* Just edit the cuda-10.1 path if you are using a lower version of the CUDA toolkit.

And just make sure you have downloaded and installed matching versions of CUDA and cuDNN…

Hi, have you solved this issue? I have the same issue:
[TensorRT] ERROR: cuda/cudaConvolutionLayer.cpp (238) - Cudnn Error in execute: 8 (CUDNN_STATUS_EXECUTION_FAILED)

My configuration:

  • OS: ubuntu 16.04
  • cuda: 9.0
  • cudnn: 7.5.0
  • tensorrt: 5.1.5
  • GPU: 2080Ti

I don’t know how to solve it! Thanks.

Hi,
Yes, I solved this by installing the version of cuDNN that is compatible with the CUDA toolkit.

example:

If you are using CUDA 9 on Ubuntu 16.04, then install the compatible version of cuDNN.

Go to: https://developer.nvidia.com/rdp/cudnn-download,

and download the following:

Download cuDNN v7.6.3 (August 23, 2019), for CUDA 9.0
“Library for Windows, Mac, Linux, Ubuntu(x86_64 architecture)”
1. cuDNN Runtime Library for Ubuntu16.04 (Deb)
2. cuDNN Developer Library for Ubuntu16.04 (Deb)

-> sudo dpkg -i the .deb packages from (1) and (2)

3. cuDNN Library for Linux

->  Unzip and copy the files into your CUDA installation, in your case cuda-9.0. * Note: check the
    name of your CUDA installation folder to confirm it is cuda-9.0:

  $ sudo cp cuda/include/cudnn.h /usr/local/cuda-9.0/include
  $ sudo cp cuda/lib64/libcudnn* /usr/local/cuda-9.0/lib64
  $ sudo chmod a+r /usr/local/cuda-9.0/include/cudnn.h /usr/local/cuda-9.0/lib64/libcudnn*

restart!
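
To confirm which cuDNN version the compiler will actually see after copying the header, you can read the version macros straight out of cudnn.h. A stdlib-only sketch (the helper functions are my own illustration, not NVIDIA tooling; in cuDNN 7.x the version macros live in cudnn.h itself):

```python
import re

def cudnn_version_from_header(header_text):
    """Extract (major, minor, patch) from cudnn.h's version macros."""
    parts = {}
    for key in ("CUDNN_MAJOR", "CUDNN_MINOR", "CUDNN_PATCHLEVEL"):
        m = re.search(r"#define\s+%s\s+(\d+)" % key, header_text)
        if m:
            parts[key] = int(m.group(1))
    if len(parts) != 3:  # macros missing: not a cuDNN 7.x header
        return None
    return (parts["CUDNN_MAJOR"], parts["CUDNN_MINOR"], parts["CUDNN_PATCHLEVEL"])

def installed_cudnn_version(path="/usr/local/cuda/include/cudnn.h"):
    """Read the header that was copied into the CUDA install, if present."""
    try:
        with open(path) as f:
            return cudnn_version_from_header(f.read())
    except OSError:
        return None
```

On the setup described above, installed_cudnn_version() should report (7, 6, 2) if the copy into the CUDA directory succeeded.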

Hi, thank you for helping me. I have tried it, but it did not solve my problem:
[TensorRT] ERROR: cuda/cudaConvolutionLayer.cpp (238) - Cudnn Error in execute: 8 (CUDNN_STATUS_EXECUTION_FAILED)

My configuration:

  • OS: ubuntu 16.04
  • cuda: 9.0
  • cudnn: 7.6.3.30
  • tensorrt: 5.1.5
  • GPU: 2080Ti

dpkg -l | grep TensorRT

ii graphsurgeon-tf 5.1.5-1+cuda9.0 amd64 GraphSurgeon for TensorRT package
ii libnvinfer-dev 5.1.5-1+cuda9.0 amd64 TensorRT development libraries and headers
ii libnvinfer-samples 5.1.5-1+cuda9.0 all TensorRT samples and documentation
ii libnvinfer5 5.1.5-1+cuda9.0 amd64 TensorRT runtime libraries
ii python-libnvinfer 5.1.5-1+cuda9.0 amd64 Python bindings for TensorRT
ii python-libnvinfer-dev 5.1.5-1+cuda9.0 amd64 Python development package for TensorRT
ii tensorrt 5.1.5.0-1+cuda9.0 amd64 Meta package of TensorRT
ii uff-converter-tf 5.1.5-1+cuda9.0 amd64 UFF converter for TensorRT package

dpkg -l | grep cudnn

ii libcudnn7 7.6.3.30-1+cuda9.0 amd64 cuDNN runtime libraries
ii libcudnn7-dev 7.6.3.30-1+cuda9.0 amd64 cuDNN development libraries and headers

I want to know what causes the issue. Is the version of cuDNN incompatible with CUDA 9.0?
Below is the TensorRT Release 5.1.5 (Desktop users) Documentation.
https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-515/tensorrt-support-matrix/index.html
NVIDIA recommends cuDNN 7.5.0 as the supported version for TensorRT 5.1.5; I have tried that, but it did not work and I had the same issue.
Look forward to your reply!

Hi…

I was also using cuda-9 before; however, when I installed cuDNN, I upgraded CUDA to 10.1. Try upgrading to 10.1 as well.

Make sure to use the deb packages when installing…

#/usr/local$ ls
bin cuda-10.1 doc games lib sbin src
cuda cuda-9.0 etc include man share

#dpkg -l | grep TensorRT

ii graphsurgeon-tf 5.1.5-1+cuda10.1 amd64 GraphSurgeon for TensorRT package
ii libnvinfer-dev 5.1.5-1+cuda10.1 amd64 TensorRT development libraries and headers
ii libnvinfer-samples 5.1.5-1+cuda10.1 all TensorRT samples and documentation
ii libnvinfer5 5.1.5-1+cuda10.1 amd64 TensorRT runtime libraries
ii python3-libnvinfer 5.1.5-1+cuda10.1 amd64 Python 3 bindings for TensorRT
ii python3-libnvinfer-dev 5.1.5-1+cuda10.1 amd64 Python 3 development package for TensorRT
ii tensorrt 5.1.5.0-1+cuda10.1 amd64 Meta package of TensorRT
ii uff-converter-tf 5.1.5-1+cuda10.1 amd64 UFF converter for TensorRT package

dpkg -l | grep cudnn

ii libcudnn7 7.6.2.24-1+cuda10.1 amd64 cuDNN runtime libraries
ii libcudnn7-dev 7.6.2.24-1+cuda10.1 amd64 cuDNN development libraries and headers

Hi,

Thanks, I upgraded CUDA to 10.1 and reinstalled cuDNN and TensorRT, and now it works. So I think CUDA 9.0 is incompatible with the 2080 Ti graphics card. Thank you very much.

This did seem to be related to the cuDNN version. When I enabled full CUDA debugging, I realized the cuDNN version being reported was 7.5.0 even though 7.6.3 was installed. I believe this may have been due to some shadowing from pytorch, because when I upgraded torch to 1.2.0 the problem was fixed.
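
For anyone hitting the same shadowing problem: the version cuDNN reports at runtime can be checked directly with ctypes, and comparing it against what dpkg says is installed makes a bundled copy (e.g. one shipped inside a pip-installed framework) easy to spot. A sketch, assuming Linux and the standard cudnnGetVersion() entry point in cuDNN 7.x:

```python
import ctypes

def runtime_cudnn_version():
    """Ask the libcudnn that the dynamic linker actually resolves for its
    version. cudnnGetVersion() encodes it as major*1000 + minor*100 + patch
    (e.g. 7603 for 7.6.3). Returns None when the library cannot be loaded."""
    try:
        lib = ctypes.CDLL("libcudnn.so.7")
    except OSError:  # cuDNN not installed / not on the loader path
        return None
    lib.cudnnGetVersion.restype = ctypes.c_size_t
    v = lib.cudnnGetVersion()
    return (v // 1000, (v % 1000) // 100, v % 100)

# If this reports 7.5.0 while dpkg says 7.6.3 is installed, another copy of
# libcudnn (e.g. loaded first by importing torch) is shadowing the system one.
```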