I’m trying to run setup.py from the torch2trt folder for the trt_pose model.
I’m on a Tesla V100 GPU Azure cloud instance, and when I run the command `python3 setup.py install --plugins` I get the error "fatal error: NvInfer.h: No such file or directory".
Can someone please help with what I am missing in the process?
Please note: I currently don’t have a Jetson device and have not installed the JetPack SDK.
TensorRT Version : 8.5.3.1
GPU Type : Tesla V100
Nvidia Driver Version : 520.61.05
CUDA Version : 11.8
CUDNN Version : 8.9.0
Operating System + Version : Linux Ubuntu 18.04
Python Version (if applicable) : 3.6
Refer to the following link:
(GitHub issue: opened 28 Nov 2018, closed 27 Apr 2023)
I followed the "Compiling the Project" step but get:
```bash
~/jetson-inference/build$ make
[ 2%] Building CXX object CMakeFiles/jetson-inference.dir/segNet.cpp.o
In file included from /home/jacob/jetson-inference/segNet.h:27:0,
                 from /home/jacob/jetson-inference/segNet.cpp:23:
/home/jacob/jetson-inference/tensorNet.h:27:21: fatal error: NvInfer.h: No such file or directory
 #include "NvInfer.h"
                     ^
compilation terminated.
CMakeFiles/jetson-inference.dir/build.make:1409: recipe for target 'CMakeFiles/jetson-inference.dir/segNet.cpp.o' failed
make[2]: *** [CMakeFiles/jetson-inference.dir/segNet.cpp.o] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/jetson-inference.dir/all' failed
make[1]: *** [CMakeFiles/jetson-inference.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2
```
I have tried downloading TensorRT 5 again, but it does not solve the problem. I searched the system and cannot find NvInfer.h.
Jacob
Also, verify that you have set the TensorRT path properly.
BTW, JetPack is for Jetson-series products, not the V100.
Moving to the TensorRT forum for better support, thanks.
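Verifying the path can look like the sketch below. The search locations are just common examples, not where TensorRT is guaranteed to be on any given machine:

```shell
# Look for the TensorRT header in the usual system include locations
# (for a tar install, search your extracted TensorRT-* directory instead).
find /usr/include /usr/local/include -name 'NvInfer.h' 2>/dev/null || true

# Check whether the runtime library is known to the dynamic loader.
ldconfig -p 2>/dev/null | grep nvinfer || echo "libnvinfer not in ldconfig cache"
```

If the `find` prints nothing, the headers are not in a standard compiler search path, which matches the `NvInfer.h: No such file or directory` error.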
I’ve installed TensorRT using the tar file installation method, and I added the /lib path to the LD_LIBRARY_PATH environment variable before running the setup file.
I followed the exact steps mentioned in the link below for the tar installation method.
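Note that with a tar install, LD_LIBRARY_PATH alone is not enough: it only affects the dynamic loader at runtime, while the compiler also needs the include and lib directories at build time. A minimal sketch, assuming the tarball was extracted to `$HOME/TensorRT-8.5.3.1` (adjust to your actual path):

```shell
# Assumed tar-extraction path; adjust to where you untarred TensorRT.
TRT_DIR=$HOME/TensorRT-8.5.3.1

# Runtime: lets the dynamic loader find libnvinfer.so when running.
export LD_LIBRARY_PATH=$TRT_DIR/lib:$LD_LIBRARY_PATH

# Build time: gcc/g++ honor these for header and library search paths.
export CPATH=$TRT_DIR/include:$CPATH
export LIBRARY_PATH=$TRT_DIR/lib:$LIBRARY_PATH

# Then re-run the build in the same shell:
# python3 setup.py install --plugins
```

With CPATH set this way, `#include <NvInfer.h>` resolves without any changes to setup.py.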
Hi.
For those who are facing this problem, please try the link below. That is what worked for me.
(GitHub issue: opened 10 Sep 2020, closed 18 Jul 2022)
Dear all,
I managed to install TensorRT from a tar file by referring to the [official NVIDIA site](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing-tar) and tried to install torch2trt with plugins by referring to the [official torch2trt GitHub](https://github.com/NVIDIA-AI-IOT/torch2trt).
I installed torch2trt without plugins without any problems.
However, I fail to install torch2trt with plugins. To be more specific, I'm having a problem with NvInfer.h.
I think NvInfer.h does not exist on my machine.
I installed the packages below (GPU driver, CUDA toolkit, cuDNN and TensorRT) by referring to [DL installation](https://github.com/vujadeyoon/DL-UbuntuMATE18.04LTS-Installation) and [TensorRT-Torch2TRT](https://github.com/vujadeyoon/TensorRT-Torch2TRT).
My machine environments are as follows:
- Operating System (OS): Ubuntu MATE 18.04.3 LTS (Bionic)
- Graphics Processing Unit (GPU): NVIDIA TITAN RTX × 4
- GPU driver: Nvidia-440.100
- CUDA toolkit: 10.1 (default), 10.2
- cuDNN: cuDNN v7.6.5
- PyTorch: 1.3.0
- TensorRT: 7.0.0.11
- torch2trt: 0.1.0
The debug information below is from installing torch2trt with plugins, i.e., from running the command `sudo python setup.py install --plugins`:
```bash
running install
running bdist_egg
running egg_info
writing torch2trt.egg-info/PKG-INFO
writing dependency_links to torch2trt.egg-info/dependency_links.txt
writing top-level names to torch2trt.egg-info/top_level.txt
reading manifest file 'torch2trt.egg-info/SOURCES.txt'
writing manifest file 'torch2trt.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
running build_ext
building 'plugins' extension
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/aarch64-linux-gnu -I/home/vujadeyoon/.local/lib/python3.6/site-packages/torch/include -I/home/vujadeyoon/.local/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/vujadeyoon/.local/lib/python3.6/site-packages/torch/include/TH -I/home/vujadeyoon/.local/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.6m -c torch2trt/plugins/interpolate.cpp -o build/temp.linux-x86_64-3.6/torch2trt/plugins/interpolate.o -DUSE_DEPRECATED_INTLIST -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=plugins -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
torch2trt/plugins/interpolate.cpp:6:10: fatal error: NvInfer.h: No such file or directory
#include <NvInfer.h>
^~~~~~~~~~~
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
```
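The compile line above passes several `-I` flags, but none of them points at a TensorRT include directory, so the failure is purely a missing header search path. This can be verified in isolation with a one-line test file (the `TRT_DIR` value is an assumed tar-install path, not something from my machine):

```shell
# Assumed tar-extraction path for TensorRT; adjust to your system.
TRT_DIR=/opt/TensorRT-7.0.0.11

# Minimal reproduction of the failing include.
echo '#include <NvInfer.h>' > /tmp/check_nvinfer.cpp

# Without -I this fails exactly like the plugins build; with the header
# directory on the search path it succeeds.
g++ -fsyntax-only /tmp/check_nvinfer.cpp 2>/dev/null \
  && echo "NvInfer.h visible to the compiler" \
  || echo "NvInfer.h NOT visible; try adding -I$TRT_DIR/include"
```

If the second message prints, exporting `CPATH=$TRT_DIR/include` (or adding the directory to the extension's include dirs) before re-running setup.py should resolve it.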
My questions are:
1. How do I install torch2trt with plugins?
2. Should NvInfer.h not have been installed when installing TensorRT from the tar file?
3. Please give me any other advice. If you have succeeded in installing torch2trt with plugins, please share your installation steps.
Best regards,
Vujadeyoon