Description
What is the proper way to upgrade the TensorRT version in the DeepStream v6.0.1-triton docker image?
Current TensorRT version: 8.0.1.6
CUDA version: 11.4
nvcc version: 11.4
I tried to upgrade TensorRT with pip install tensorrt. It upgraded to the latest version, but now I hit a CUDA driver version mismatch when I run import pycuda.autoinit.
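For context, a bare pip install tensorrt pulls the newest wheel, which targets a newer CUDA than the 11.4 toolkit shipped in this image. A minimal sketch of a guard one could run before upgrading; the wheel_needs value below is an assumption for illustration, not a figure from NVIDIA's docs:

```shell
# Hypothetical pre-upgrade check: compare the container's CUDA toolkit
# version against the minimum the new TensorRT wheel would need.
toolkit_cuda="11.4"   # from nvcc --version in deepstream:6.0.1-triton
wheel_needs="11.8"    # ASSUMPTION: placeholder for the latest wheel's minimum CUDA

# True if version $1 >= $2 (pure shell, using sort -V).
ver_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

if ver_ge "$toolkit_cuda" "$wheel_needs"; then
  echo "toolkit new enough: safe to upgrade"
else
  echo "toolkit too old: pin TensorRT instead of installing the latest wheel"
fi
```

With the values above this prints the "pin TensorRT" branch, which matches the mismatch seen in the question.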
#################
I see similar issues in the DeepStream v6.1.1-triton docker image.
I started a container and checked nvcc --version; it reports release 11.7:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Jun__8_16:49:14_PDT_2022
Cuda compilation tools, release 11.7, V11.7.99
Build cuda_11.7.r11.7/compiler.31442593_0
Then when I checked nvidia-smi, it showed CUDA Version 10.2:
NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2
So I checked LD_LIBRARY_PATH, which refers to /usr/local/cuda/compat/lib, whereas only /usr/local/cuda/compat/lib.real exists in the image. Hence I created a soft link lib pointing to lib.real. After that, nvidia-smi reports:
NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 11.7
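The symlink workaround described above can be sketched as follows; a scratch directory stands in for /usr/local/cuda/compat so the commands can be tried anywhere, while in the container the prefix would be /usr/local/cuda/compat:

```shell
# LD_LIBRARY_PATH points at .../compat/lib, but the image only ships
# .../compat/lib.real. Recreate the missing path with a soft link.
compat="$(mktemp -d)/compat"      # stand-in for /usr/local/cuda/compat
mkdir -p "$compat/lib.real"

# Create `lib` as a soft link to `lib.real` (idempotent with -sfn):
ln -sfn "$compat/lib.real" "$compat/lib"

# The dynamic loader now resolves the directory LD_LIBRARY_PATH expects:
readlink "$compat/lib"
```

In the container, running ldconfig afterwards makes sure the loader cache picks up the compat libraries.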
Then I installed pycuda with pip install pycuda. When I import pycuda.autoinit, it gives the error below:
>>> import pycuda.autoinit
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.8/dist-packages/pycuda/autoinit.py", line 5, in <module>
cuda.init()
pycuda._driver.LogicError: cuInit failed: system has unsupported display driver / cuda driver combination
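This cuInit error typically means the user-space CUDA libraries (here, the 11.7 compat libraries the symlink exposed) are newer than what the host kernel driver supports. A rough sanity check is to parse the driver version out of the nvidia-smi banner and compare it against a minimum; the 450.36.06 threshold below is a placeholder assumption for illustration, and the authoritative minimums are in NVIDIA's CUDA compatibility documentation:

```shell
# Banner line copied from the question; in a live container one would
# capture it with: banner="$(nvidia-smi | grep 'Driver Version')"
banner="NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2"

# Extract the host driver version from the banner.
driver="$(printf '%s\n' "$banner" | sed -n 's/.*Driver Version: \([0-9.]*\).*/\1/p')"
echo "host driver: $driver"

min_driver="450.36.06"   # ASSUMPTION: placeholder minimum for 11.x compat libs
if [ "$(printf '%s\n' "$min_driver" "$driver" | sort -V | head -n 1)" = "$min_driver" ]; then
  echo "driver can back the compat libraries"
else
  echo "driver too old: the host driver itself must be upgraded, not just the container"
fi
```

With the 440.33.01 driver from the question, this lands in the "driver too old" branch, which would explain why cuInit fails even after the symlink fix.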