Description
I'm trying to write a simple CMake file to build my first TensorRT project. My
CMakeLists.txt contains these lines:
...
FIND_PACKAGE (CUDA REQUIRED)
INCLUDE_DIRECTORIES (${CUDA_INCLUDE_DIRS})
MESSAGE(STATUS "CUDA_INCLUDE_DIRS=${CUDA_INCLUDE_DIRS}")
MESSAGE(STATUS "CUDA_LIBRARIES=${CUDA_LIBRARIES}")
...
TARGET_LINK_LIBRARIES (test_003 Threads::Threads ${CUDA_LIBRARIES} ${OpenCV_LIBS})
CMake is finding CUDA, since it logs these messages:
-- Found CUDA: /usr/local/cuda (found version "10.2")
-- CUDA_INCLUDE_DIRS=/usr/local/cuda/include
-- CUDA_LIBRARIES=/usr/local/cuda/lib64/libcudart_static.a;-lpthread;dl;/usr/lib/aarch64-linux-gnu/librt.so
Compilation works fine, but at link time I get this error:
main.cpp:(.text.startup+0x3d0): undefined reference to `createInferRuntime_INTERNAL'
collect2: error: ld returned 1 exit status
Which libraries should I be linking against? Or can someone point me to a simple CUDA/CMake file I can read?
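For what it's worth, `createInferRuntime_INTERNAL` is a TensorRT symbol (it sits behind `nvinfer1::createInferRuntime`), so the missing library is TensorRT's core `libnvinfer`, which the `TARGET_LINK_LIBRARIES` line above never mentions. A minimal sketch of the fix, assuming a JetPack install that places the TensorRT libraries under `/usr/lib/aarch64-linux-gnu` (the `test_003` target name is taken from the snippet above; adjust the hint path if TensorRT lives elsewhere on your system):

```cmake
# Locate TensorRT's core library; this is what provides
# createInferRuntime_INTERNAL and the rest of the nvinfer1 API.
find_library(NVINFER_LIBRARY nvinfer
             HINTS /usr/lib/aarch64-linux-gnu /usr/local/cuda/lib64)

# Add it to the existing link line alongside CUDA and OpenCV.
target_link_libraries(test_003
    Threads::Threads
    ${CUDA_LIBRARIES}
    ${OpenCV_LIBS}
    ${NVINFER_LIBRARY})
```

If the project also uses TensorRT plugins or the ONNX parser, the same `find_library` pattern applies to `nvinfer_plugin` and `nvonnxparser` respectively.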
Environment
TensorRT Version: 7.1.3.0-1+cuda10.2
GPU Type: Xavier NX
Nvidia Driver Version: ?
CUDA Version: 10.2
CUDNN Version: 8.0.0.180-1+cuda10.2
Operating System + Version: Ubuntu 18.04.5
Python Version (if applicable):
TensorFlow Version (if applicable): 7.1.3-1+cuda10.2
PyTorch Version (if applicable): ?
Baremetal or Container (if container which image + tag): jetson-nx-jp441-sd-card-image.img
CMakeLists.txt (798 Bytes)