TensorRT not working on RTX 3080

Description

I am trying to bring up TensorRT in Docker for an RTX 3080. The same setup works fine on older GPUs with TensorRT 7.x.x and CUDA 11.0, but when I try it on the 3080 I get a "library not found" error. I am trying to figure out the correct CUDA and TensorRT versions for this GPU.

Environment

TensorRT Version: 8.2.0.6
GPU Type: RTX 3080
Nvidia Driver Version: 470.63.01
CUDA Version: 11.4
CUDNN Version: 8.0.2
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable): 2.5.0
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): Container

Steps To Reproduce

Built the image from the Dockerfile provided in the TensorRT repo.

Docker error logs:

workers_gpu | 2021-10-11 02:49:00.148285: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64:/src/TensorRT/build/out:/usr/lib/x86_64-linux-gnu
workers_gpu | 2021-10-11 02:49:00.148321: F tensorflow/compiler/tf2tensorrt/stub/nvinfer_stub.cc:49] getInferLibVersion symbol not found.
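
The log shows TensorFlow trying to `dlopen` `libnvinfer.so.7` specifically, while the environment lists TensorRT 8.2, which ships `libnvinfer.so.8`. A small, hedged sketch to confirm which sonames the dynamic loader can actually resolve inside the container (the function name `probe_nvinfer` is mine, not from any library):

```python
import ctypes

def probe_nvinfer(majors=(7, 8)):
    """Report which libnvinfer sonames the dynamic loader can resolve.

    TF 2.5 was built against TensorRT 7, so it dlopen()s libnvinfer.so.7;
    a TensorRT 8.x install only ships libnvinfer.so.8, which would explain
    the "cannot open shared object file" error above.
    """
    status = {}
    for major in majors:
        soname = f"libnvinfer.so.{major}"
        try:
            ctypes.CDLL(soname)  # same resolution path dlopen() uses
            status[soname] = "loadable"
        except OSError:
            status[soname] = "missing"
    return status

if __name__ == "__main__":
    for soname, state in probe_nvinfer().items():
        print(f"{soname}: {state}")
```

If this prints `libnvinfer.so.7: missing` but `libnvinfer.so.8: loadable`, the container has TensorRT 8 while TensorFlow 2.5 expects TensorRT 7, and the fix is matching the TRT version to what your TF build was linked against rather than adjusting `LD_LIBRARY_PATH`.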

Hi @logesh.krishnan,

It looks like the dependencies are not configured properly. Please follow the steps in the installation guide. Also, please check the support matrix doc and make sure the dependencies you have installed are compatible with each other.
https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html
https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html
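
To compare what is actually installed against the support matrix, a few quick checks inside the container can help. This is a sketch: the package names assume the Debian/Ubuntu TensorRT packages, so adjust the `grep` patterns if TensorRT was installed from a tarball.

```shell
# List any installed TensorRT packages (Debian/Ubuntu packaging assumed).
dpkg -l 2>/dev/null | grep -E 'nvinfer|tensorrt' || echo "no TensorRT packages found"

# CUDA toolkit version, if nvcc is available in the container.
command -v nvcc >/dev/null && nvcc --version || echo "nvcc not on PATH"

# Which libnvinfer sonames the dynamic linker cache knows about.
ldconfig -p 2>/dev/null | grep libnvinfer || echo "libnvinfer not in linker cache"
```

Cross-check the versions these commands report against the support matrix entry for your TensorRT release (and against the TensorRT version your TensorFlow build expects) before rebuilding the image.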

Thank you.

Okay, thank you.