Trouble installing torch2trt

Description

I can’t install the torch2trt plugins in the Docker image “l4t-pytorch:r32.6.1-pth1.8-py3”. I’ve been trying for days, but I keep getting this error:

running install
running bdist_egg
running egg_info
writing torch2trt.egg-info/PKG-INFO
writing dependency_links to torch2trt.egg-info/dependency_links.txt
writing top-level names to torch2trt.egg-info/top_level.txt
reading manifest file 'torch2trt.egg-info/SOURCES.txt'
adding license file 'LICENSE.md'
writing manifest file 'torch2trt.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-aarch64/egg
running install_lib
running build_py
running build_ext
building 'plugins' extension
Emitting ninja build file /torch2trt/build/temp.linux-aarch64-3.6/build.ninja…
Compiling objects…
Allowing ninja to set a default number of workers… (overridable by setting the environment variable MAX_JOBS=N)
[1/1] c++ -MMD -MF /torch2trt/build/temp.linux-aarch64-3.6/torch2trt/plugins/plugins.o.d -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/aarch64-linux-gnu -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.6m -c -c /torch2trt/torch2trt/plugins/plugins.cpp -o /torch2trt/build/temp.linux-aarch64-3.6/torch2trt/plugins/plugins.o -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=plugins -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
FAILED: /torch2trt/build/temp.linux-aarch64-3.6/torch2trt/plugins/plugins.o
c++ -MMD -MF /torch2trt/build/temp.linux-aarch64-3.6/torch2trt/plugins/plugins.o.d -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/aarch64-linux-gnu -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.6m -c -c /torch2trt/torch2trt/plugins/plugins.cpp -o /torch2trt/build/temp.linux-aarch64-3.6/torch2trt/plugins/plugins.o -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=plugins -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
In file included from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/Parallel.h:140:0,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/all.h:13,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/extension.h:4,
from /torch2trt/torch2trt/plugins/plugins.cpp:1:
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/ParallelOpenMP.h:83:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
#pragma omp parallel for if ((end - begin) >= grain_size)

In file included from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/cuda/CUDAEvent.h:3:0,
from /torch2trt/torch2trt/plugins/interpolate.cpp:8,
from /torch2trt/torch2trt/plugins/plugins.cpp:2:
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/cuda/ATenCUDAGeneral.h:3:10: fatal error: cuda.h: No such file or directory
#include <cuda.h>
^~~~~~~~
compilation terminated.
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/cpp_extension.py", line 1673, in _run_ninja_build
    env=env)
  File "/usr/lib/python3.6/subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "setup.py", line 59, in <module>
    cmdclass={'build_ext': BuildExtension}
  File "/usr/local/lib/python3.6/dist-packages/setuptools/__init__.py", line 153, in setup
    return distutils.core.setup(**attrs)
  File "/usr/lib/python3.6/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/usr/lib/python3.6/distutils/dist.py", line 955, in run_commands
    self.run_command(cmd)
  File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "/usr/local/lib/python3.6/dist-packages/setuptools/command/install.py", line 67, in run
    self.do_egg_install()
  File "/usr/local/lib/python3.6/dist-packages/setuptools/command/install.py", line 109, in do_egg_install
    self.run_command('bdist_egg')
  File "/usr/lib/python3.6/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "/usr/local/lib/python3.6/dist-packages/setuptools/command/bdist_egg.py", line 164, in run
    cmd = self.call_command('install_lib', warn_dir=0)
  File "/usr/local/lib/python3.6/dist-packages/setuptools/command/bdist_egg.py", line 150, in call_command
    self.run_command(cmdname)
  File "/usr/lib/python3.6/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "/usr/local/lib/python3.6/dist-packages/setuptools/command/install_lib.py", line 11, in run
    self.build()
  File "/usr/lib/python3.6/distutils/command/install_lib.py", line 109, in build
    self.run_command('build_ext')
  File "/usr/lib/python3.6/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "/usr/local/lib/python3.6/dist-packages/setuptools/command/build_ext.py", line 79, in run
    _build_ext.run(self)
  File "/usr/local/lib/python3.6/dist-packages/Cython/Distutils/old_build_ext.py", line 186, in run
    _build_ext.build_ext.run(self)
  File "/usr/lib/python3.6/distutils/command/build_ext.py", line 339, in run
    self.build_extensions()
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/cpp_extension.py", line 708, in build_extensions
    build_ext.build_extensions(self)
  File "/usr/local/lib/python3.6/dist-packages/Cython/Distutils/old_build_ext.py", line 195, in build_extensions
    _build_ext.build_ext.build_extensions(self)
  File "/usr/lib/python3.6/distutils/command/build_ext.py", line 448, in build_extensions
    self._build_extensions_serial()
  File "/usr/lib/python3.6/distutils/command/build_ext.py", line 473, in _build_extensions_serial
    self.build_extension(ext)
  File "/usr/local/lib/python3.6/dist-packages/setuptools/command/build_ext.py", line 202, in build_extension
    _build_ext.build_extension(self, ext)
  File "/usr/lib/python3.6/distutils/command/build_ext.py", line 533, in build_extension
    depends=ext.depends)
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/cpp_extension.py", line 538, in unix_wrap_ninja_compile
    with_cuda=with_cuda)
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/cpp_extension.py", line 1359, in _write_ninja_file_and_compile_objects
    error_prefix='Error compiling objects for extension')
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/cpp_extension.py", line 1683, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension
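
The failure boils down to the compiler not finding cuda.h. As a quick sanity check (hypothetical commands, on the assumption that the NVIDIA runtime mounts the JetPack CUDA toolkit under /usr/local/cuda), one can verify inside the container whether the header is actually there before rebuilding:

# check that the CUDA headers are mounted into the container
ls /usr/local/cuda/include/cuda.h
# if missing, confirm the container was started with --runtime nvidia,
# since on L4T the CUDA toolkit is mounted from the host at run time;
# if present but unseen by the compiler, export the include path explicitly
export CPATH=/usr/local/cuda/include:$CPATH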

I tried other images, such as “l4t-pytorch:r32.7.1-pth1.9-py3”, but with no luck.

I need torch2trt in order to use trt_pose.

Environment

JetPack Version: 4.6.2
GPU Type: Jetson Xavier
Nvidia Driver Version:
CUDA Version: CUDA 10.2
CUDNN Version: cuDNN 8.2.1
Operating System + Version:
TensorRT Version: 8.0.1
PyTorch Version (if applicable): 1.9.0 // 1.8.0
Baremetal or Container: l4t-pytorch:r32.6.1-pth1.8-py3 / l4t-pytorch:r32.7.1-pth1.9-py3

Steps To Reproduce

sudo docker pull nvcr.io/nvidia/l4t-pytorch:r32.7.1-pth1.9-py3
sudo docker run -it --runtime nvidia --network host --entrypoint sh nvcr.io/nvidia/l4t-pytorch:r32.7.1-pth1.9-py3
git clone https://github.com/NVIDIA-AI-IOT/torch2trt
cd torch2trt
sudo python3 setup.py install --plugins
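
A variant of the same steps that exports the CUDA paths before building (a sketch only; CUDA_HOME=/usr/local/cuda and the CPATH export are my assumptions, not a confirmed fix):

sudo docker run -it --runtime nvidia --network host --entrypoint sh nvcr.io/nvidia/l4t-pytorch:r32.7.1-pth1.9-py3
# inside the container, point the compiler at the mounted toolkit
export CUDA_HOME=/usr/local/cuda
export CPATH=$CUDA_HOME/include:$CPATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
git clone https://github.com/NVIDIA-AI-IOT/torch2trt
cd torch2trt
python3 setup.py install --plugins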

I also tried to install the plugins with “cmake -B build . && cmake --build build --target install && ldconfig” and with “cmake -D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-10.0 -B build . && cmake --build build --target install && ldconfig”, but I get this error:

perhaps remove parentheses?
assert(len(self) > 0, 'Cannot create default flattener without input data.')
-- The C compiler identification is GNU 7.5.0
-- The CXX compiler identification is GNU 7.5.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Error at /usr/share/cmake-3.10/Modules/FindCUDA.cmake:682 (message):
  Specify CUDA_TOOLKIT_ROOT_DIR
Call Stack (most recent call first):
  CMakeLists.txt:8 (find_package)
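
Note that JetPack 4.6 ships CUDA 10.2, so /usr/local/cuda-10.0 is the wrong toolkit root in any case. A sketch of the invocation with the matching path (assuming the toolkit is actually present in the container; FindCUDA also needs nvcc under that root):

# verify nvcc exists under the toolkit root first
ls /usr/local/cuda-10.2/bin/nvcc
cmake -D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-10.2 -B build . && cmake --build build --target install && ldconfig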

Steps To Reproduce

sudo docker pull nvcr.io/nvidia/l4t-pytorch:r32.6.1-pth1.8-py3
sudo docker run -it --runtime nvidia --network host --entrypoint sh nvcr.io/nvidia/l4t-pytorch:r32.6.1-pth1.8-py3
git clone https://github.com/NVIDIA-AI-IOT/torch2trt
cd torch2trt
sudo python3 setup.py install
cmake -B build . && cmake --build build --target install && ldconfig

I’m also trying to re-flash my JetPack, but I get an error.

I’m using a Linux Ubuntu 20.04.5 machine, and I need to flash JetPack 4.6. JetPack 4.6 seems to be supported only on SDK Manager version 1.8.x and lower. However, SDK Manager 1.8.x doesn’t let me log in, and I get the error:

“User is not authorized on NVIDIA developer server.”

Version 1.9.x and higher lets me log in but doesn’t support JetPack 4.6.

Could someone help me?

I re-flashed my Jetson with the same version of JetPack, but I still have the same error. If I try to install the torch2trt plugins with “sudo python3 setup.py install” or “cmake -D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-10.2 -B build . && cmake --build build --target install && ldconfig”, I get the error explained previously.

Where am I going wrong? On the same Jetson we are able to use TensorRT models in production, but I can’t install the torch2trt plugins needed to use trt_pose.

@giulio.carcano, sorry for the delayed response. Do you still need help on this issue?
We are moving this post to the Jetson Xavier forum to get better help.

Thank you.

No, thanks.

I resolved it by installing trt_pose and torch2trt on the Jetson itself, using the image “nvcr.io/nvidia/l4t-pytorch:r32.6.1-pth1.9-py3” and creating a .whl for each of the two libraries.

After that, I can build trt_pose and torch2trt in my Docker Desktop with CPU only, using “pip3 install {whl_file}”.
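
For anyone taking the same route, the wheel-based flow looks roughly like this (a sketch; the exact wheel filenames will differ):

# on the Jetson, inside the l4t-pytorch:r32.6.1-pth1.9-py3 container
git clone https://github.com/NVIDIA-AI-IOT/torch2trt
cd torch2trt
python3 setup.py bdist_wheel    # produces dist/torch2trt-*.whl
# repeat for trt_pose, copy the .whl files to the CPU-only machine, then:
pip3 install torch2trt-*.whl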

Now, if you have any advice for improving “man down” detection with trt_pose, it is welcome!

Thanks for your response.
