TensorRT Version: 7.2.3.4
GPU Type: /
Nvidia Driver Version: 460.32.03
CUDA Version: 11.2
CUDNN Version: 8.1.1.33
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): /
TensorFlow Version (if applicable): /
PyTorch Version (if applicable): /
Baremetal or Container (if container which image + tag): Baremetal
Steps To Reproduce
cd /my/custom/path/to/tensorrt/samples/
make
outputs:
../../lib/libnvinfer.so: undefined reference to `nvrtcDestroyProgram@libnvrtc.so.11.1'
../../lib/libmyelin.so: undefined reference to `nvrtcAddNameExpression@libnvrtc.so.11.1'
../../lib/libnvinfer.so: undefined reference to `nvrtcCreateProgram@libnvrtc.so.11.1'
../../lib/libnvinfer.so: undefined reference to `nvrtcCompileProgram@libnvrtc.so.11.1'
../../lib/libnvinfer.so: undefined reference to `nvrtcGetPTX@libnvrtc.so.11.1'
../../lib/libnvinfer.so: undefined reference to `nvrtcVersion@libnvrtc.so.11.1'
../../lib/libnvinfer.so: undefined reference to `nvrtcGetProgramLog@libnvrtc.so.11.1'
../../lib/libmyelin.so: undefined reference to `nvrtcGetLoweredName@libnvrtc.so.11.1'
../../lib/libnvinfer.so: undefined reference to `nvrtcGetErrorString@libnvrtc.so.11.1'
../../lib/libnvinfer.so: undefined reference to `nvrtcGetProgramLogSize@libnvrtc.so.11.1'
../../lib/libnvinfer.so: undefined reference to `nvrtcGetPTXSize@libnvrtc.so.11.1'
ldd /my/custom/path/to/tensorrt/targets/x86_64-linux-gnu/lib/libnvinfer.so
linux-vdso.so.1 (0x00007ffff7ffa000)
libcudnn.so.8 => /cm/shared/apps/cudnn8.1-cuda11.2/8.1.1.33/lib64/libcudnn.so.8 (0x00007fffd2147000)
libmyelin.so.1 => /cm/shared/apps/tensorrt-cuda11.2/7.2.3.4/lib/libmyelin.so.1 (0x00007fffd18c7000)
libnvrtc.so.11.1 => not found
…
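For what it's worth, the missing dependency can also be confirmed independently of ldd by listing the DT_NEEDED entries recorded in libnvinfer.so (readelf comes with binutils; the path below is the same one used in the ldd invocation above):

```shell
# List the sonames libnvinfer.so declares as dependencies; if
# libnvrtc.so.11.1 appears here, the library was linked against
# CUDA 11.1's NVRTC rather than CUDA 11.2's.
readelf -d /my/custom/path/to/tensorrt/targets/x86_64-linux-gnu/lib/libnvinfer.so | grep NEEDED
```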
Could you please provide some suggestions to work around this issue?
Hi SunilJB, and thanks for your response.
As weird as it may sound, I am not ultimately interested in running TensorRT, but only in packaging it.
In other words, a working TensorRT NGC container wouldn’t solve the issue, because the container setup is different from the baremetal environment I am using/building. And I am not interested in extending the TensorRT container.
There is no TensorRT installation step to validate, since the problem I am raising happens while building TensorRT (with make) from the source code (i.e. before installation).
I am using CUDA 11.2 and cuDNN 8.1.1, both of which are supported according to the support matrix (Support Matrix :: NVIDIA Deep Learning TensorRT Documentation) and the download page (which leads to TensorRT-7.2.3.4.Ubuntu-18.04.x86_64-gnu.cuda-11.1.cudnn8.1.tar.gz).
It looks like TensorRT is simply looking for a CUDA 11.1 library, while CUDA 11.2 is installed…
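In case it helps others hitting the same wall, one workaround I am considering is to make the NVRTC library from CUDA 11.1 available alongside CUDA 11.2, since the undefined references are versioned against the soname libnvrtc.so.11.1 (so a plain symlink from CUDA 11.2's libnvrtc.so.11.2 may not satisfy the linker). A rough sketch, where NVRTC_11_1_DIR is an assumed path to a directory containing libnvrtc.so.11.1 from a CUDA 11.1 installation:

```shell
# Sketch of a possible workaround, not verified against this exact setup.
# NVRTC_11_1_DIR is an assumption: point it at a directory that holds
# libnvrtc.so.11.1 from a CUDA 11.1 install (e.g. the cuda-nvrtc-11-1 package).
NVRTC_11_1_DIR=/usr/local/cuda-11.1/lib64

# Put CUDA 11.1's NVRTC on the loader/linker search path, then rebuild.
export LD_LIBRARY_PATH="$NVRTC_11_1_DIR:$LD_LIBRARY_PATH"
```

After that, re-running make in the samples directory should let ld resolve the nvrtc*@libnvrtc.so.11.1 references.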
Please let me know how I should proceed.
Thanks a lot.