I have untarred TensorRT to /usr/local/cuda/TensorRT and added the following environment variables to my .bashrc:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/TensorRT:/usr/local/cuda/TensorRT/targets/x86_64-linux-gnu/lib/
export CUDNN_INSTALL_DIR=/usr/local/cuda/lib64
Having done so, the sample MNIST application builds and runs successfully. However, when I try to build another package (pointpillars, from Autoware), the build fails.
I created a dummy CMake project with some extracts from pointpillars to narrow down which part fails:
find_library(NVINFER NAMES libnvinfer.so)
find_library(NVPARSERS NAMES nvparsers)
find_library(NVONNXPARSERS NAMES nvonnxparser)
if(NVINFER)
  message("TensorRT is available!")
  message("NVINFER: ${NVINFER}")
  message("NVPARSERS: ${NVPARSERS}")
  message("NVONNXPARSERS: ${NVONNXPARSERS}")
  set(TRT_AVAIL ON)
else()
  message("TensorRT is NOT Available")
  set(TRT_AVAIL OFF)
endif()
This fails to find libnvinfer.so, even though I know it is located in /usr/local/cuda/TensorRT/targets/x86_64-linux-gnu/lib/.
So my question is: what is the best way to let CMake know where to find TensorRT?
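For context, the only way I have gotten the dummy project to find the library so far is by hard-coding a hint into find_library. This is just a sketch of my workaround, using the paths from my untar location above (TENSORRT_ROOT is a variable I made up, not anything standard), and I am not sure it is the recommended approach:

```cmake
# Workaround sketch: point find_library directly at the TensorRT lib
# directories from my install. TENSORRT_ROOT is my own variable.
set(TENSORRT_ROOT /usr/local/cuda/TensorRT)
find_library(NVINFER
  NAMES nvinfer libnvinfer.so
  HINTS ${TENSORRT_ROOT}/lib
        ${TENSORRT_ROOT}/targets/x86_64-linux-gnu/lib)
```

This works, but it bakes my local path into the CMakeLists.txt, which I would rather avoid for packages like pointpillars that I do not maintain.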
I’m on Ubuntu Server 16.04 with CUDA 10.0. Following advice I received earlier on this forum from NVIDIA, I installed CUDA from the .run file and cuDNN from the .tgz archive.