I’m trying to follow the TensorRT Quick Start Guide (NVIDIA Deep Learning TensorRT Documentation).
I installed everything using pip, and the small python test code runs fine.
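For reference, the sanity check I ran was nothing more than importing the module and printing its version, roughly like this (assuming the wheel is importable as tensorrt from the same interpreter pip used):

```shell
# Confirm which TensorRT build pip installed. The fallback message covers
# the case where pip targeted a different interpreter than python3.
python3 -c "import tensorrt; print(tensorrt.__version__)" \
  || echo "tensorrt not importable from python3"
```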
Then they say to use a tool called trtexec to create a .trt file from an ONNX file, and this tool is supposed to come with the TensorRT installation. I didn’t install it myself, though; pip installed everything for me. So where do I get this tool? I looked around in the Python install directory (.local/lib/python3.8/site-packages/tensorrt), but it’s not there.
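In case it helps anyone reproduce the search: this small sketch locates a package’s install directory and looks for any trtexec file inside it. It just automates the directory search I did by hand; nothing here is guaranteed by the wheel.

```python
import importlib.util
import pathlib

def find_tool(package: str, pattern: str = "trtexec*"):
    """Return files matching `pattern` inside `package`'s install dir,
    or None if the package is not importable at all."""
    spec = importlib.util.find_spec(package)
    if spec is None or spec.origin is None:
        return None
    pkg_dir = pathlib.Path(spec.origin).parent
    # Recursively search the package tree for the tool.
    return sorted(pkg_dir.rglob(pattern))

hits = find_tool("tensorrt")
if hits is None:
    print("tensorrt is not importable from this interpreter")
else:
    print(f"trtexec candidates: {hits if hits else 'none found'}")
```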
I then tried compiling it myself from a fresh download of TensorRT (version 188.8.131.52) from https://developer.nvidia.com/nvidia-tensorrt-download, by running make inside the samples/trtexec folder. This results in the following errors:
../Makefile.config:11: CUDA_INSTALL_DIR variable is not specified, using /usr/local/cuda by default, use CUDA_INSTALL_DIR=<cuda_directory> to change.
../Makefile.config:16: CUDNN_INSTALL_DIR variable is not specified, using /usr/local/cuda by default, use CUDNN_INSTALL_DIR=<cudnn_directory> to change.
../Makefile.config:29: TRT_LIB_DIR is not specified, searching ../../lib, ../../lib, ../lib by default, use TRT_LIB_DIR=<trt_lib_directory> to change.
if [ ! -d ../../bin/chobj/../common ]; then mkdir -p ../../bin/dchobj/../common; fi; :
Compiling: trtexec.cpp
In file included from ../common/buffers.h:20,
                 from trtexec.cpp:35:
../common/common.h:38:10: fatal error: cuda_runtime_api.h: No such file or directory
   38 | #include <cuda_runtime_api.h>
      |          ^~~~~~~~~~~~~~~~~~~~
compilation terminated.
make: *** [../Makefile.config:355: ../../bin/dchobj/trtexec.o] Error 1
This makes sense; I didn’t set those environment variables. What should I set them to? Or is it really necessary for me to install CUDA separately as well?
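From what I can tell (not an authoritative answer), those variables are just filesystem paths, and the fatal error is the compiler failing to find the CUDA toolkit’s headers: cuda_runtime_api.h ships with the CUDA toolkit itself, not with the pip wheel. A sketch of what I believe the check and invocation look like, using hypothetical paths that need adjusting per machine; the variable names come straight from the Makefile warnings above:

```shell
# Check whether a CUDA toolkit is present at the default location the
# Makefile falls back to (hypothetical path -- adjust if yours differs):
ls /usr/local/cuda/include/cuda_runtime_api.h 2>/dev/null \
  || echo "no CUDA toolkit headers under /usr/local/cuda"

# If the toolkit lives elsewhere, the warnings say the paths can be passed
# on the make command line; TRT_LIB_DIR should point at the lib/ directory
# of the extracted TensorRT download (paths below are examples only):
# make CUDA_INSTALL_DIR=/usr/local/cuda-11.4 \
#      CUDNN_INSTALL_DIR=/usr/local/cuda-11.4 \
#      TRT_LIB_DIR=../../lib
```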