Description
I can't install the torch2trt plugins in the Docker image "l4t-pytorch:r32.6.1-pth1.8-py3". I've been trying for days, but I keep getting this error:
running install
running bdist_egg
running egg_info
writing torch2trt.egg-info/PKG-INFO
writing dependency_links to torch2trt.egg-info/dependency_links.txt
writing top-level names to torch2trt.egg-info/top_level.txt
reading manifest file 'torch2trt.egg-info/SOURCES.txt'
adding license file 'LICENSE.md'
writing manifest file 'torch2trt.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-aarch64/egg
running install_lib
running build_py
running build_ext
building 'plugins' extension
Emitting ninja build file /torch2trt/build/temp.linux-aarch64-3.6/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/1] c++ -MMD -MF /torch2trt/build/temp.linux-aarch64-3.6/torch2trt/plugins/plugins.o.d -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/aarch64-linux-gnu -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.6m -c -c /torch2trt/torch2trt/plugins/plugins.cpp -o /torch2trt/build/temp.linux-aarch64-3.6/torch2trt/plugins/plugins.o -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=plugins -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
FAILED: /torch2trt/build/temp.linux-aarch64-3.6/torch2trt/plugins/plugins.o
c++ -MMD -MF /torch2trt/build/temp.linux-aarch64-3.6/torch2trt/plugins/plugins.o.d -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/aarch64-linux-gnu -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.6m -c -c /torch2trt/torch2trt/plugins/plugins.cpp -o /torch2trt/build/temp.linux-aarch64-3.6/torch2trt/plugins/plugins.o -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=plugins -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
In file included from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/Parallel.h:140:0,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/all.h:13,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/extension.h:4,
from /torch2trt/torch2trt/plugins/plugins.cpp:1:
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/ParallelOpenMP.h:83:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
#pragma omp parallel for if ((end - begin) >= grain_size)
In file included from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/cuda/CUDAEvent.h:3:0,
from /torch2trt/torch2trt/plugins/interpolate.cpp:8,
from /torch2trt/torch2trt/plugins/plugins.cpp:2:
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/cuda/ATenCUDAGeneral.h:3:10: fatal error: cuda.h: No such file or directory
#include <cuda.h>
^~~~~~~~
compilation terminated.
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/utils/cpp_extension.py", line 1673, in _run_ninja_build
env=env)
File "/usr/lib/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "setup.py", line 59, in <module>
cmdclass={'build_ext': BuildExtension}
File "/usr/local/lib/python3.6/dist-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/usr/lib/python3.6/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/lib/python3.6/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.6/dist-packages/setuptools/command/install.py", line 67, in run
self.do_egg_install()
File "/usr/local/lib/python3.6/dist-packages/setuptools/command/install.py", line 109, in do_egg_install
self.run_command('bdist_egg')
File "/usr/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.6/dist-packages/setuptools/command/bdist_egg.py", line 164, in run
cmd = self.call_command('install_lib', warn_dir=0)
File "/usr/local/lib/python3.6/dist-packages/setuptools/command/bdist_egg.py", line 150, in call_command
self.run_command(cmdname)
File "/usr/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.6/dist-packages/setuptools/command/install_lib.py", line 11, in run
self.build()
File "/usr/lib/python3.6/distutils/command/install_lib.py", line 109, in build
self.run_command('build_ext')
File "/usr/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.6/dist-packages/setuptools/command/build_ext.py", line 79, in run
_build_ext.run(self)
File "/usr/local/lib/python3.6/dist-packages/Cython/Distutils/old_build_ext.py", line 186, in run
_build_ext.build_ext.run(self)
File "/usr/lib/python3.6/distutils/command/build_ext.py", line 339, in run
self.build_extensions()
File "/usr/local/lib/python3.6/dist-packages/torch/utils/cpp_extension.py", line 708, in build_extensions
build_ext.build_extensions(self)
File "/usr/local/lib/python3.6/dist-packages/Cython/Distutils/old_build_ext.py", line 195, in build_extensions
_build_ext.build_ext.build_extensions(self)
File "/usr/lib/python3.6/distutils/command/build_ext.py", line 448, in build_extensions
self._build_extensions_serial()
File "/usr/lib/python3.6/distutils/command/build_ext.py", line 473, in _build_extensions_serial
self.build_extension(ext)
File "/usr/local/lib/python3.6/dist-packages/setuptools/command/build_ext.py", line 202, in build_extension
_build_ext.build_extension(self, ext)
File "/usr/lib/python3.6/distutils/command/build_ext.py", line 533, in build_extension
depends=ext.depends)
File "/usr/local/lib/python3.6/dist-packages/torch/utils/cpp_extension.py", line 538, in unix_wrap_ninja_compile
with_cuda=with_cuda)
File "/usr/local/lib/python3.6/dist-packages/torch/utils/cpp_extension.py", line 1359, in _write_ninja_file_and_compile_objects
error_prefix='Error compiling objects for extension')
File "/usr/local/lib/python3.6/dist-packages/torch/utils/cpp_extension.py", line 1683, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension
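The fatal error above means the compiler is passed -I/usr/local/cuda/include, but cuda.h does not actually exist there inside the container. A minimal sketch to check where (if anywhere) the header is visible — the candidate paths are the standard JetPack layout and are assumptions, adjust for your image:

```python
import os

def cuda_header_candidates(cuda_home=None):
    """Return (path, exists) pairs for the usual cuda.h locations."""
    # /usr/local/cuda is the standard JetPack symlink (assumption)
    cuda_home = cuda_home or os.environ.get("CUDA_HOME", "/usr/local/cuda")
    candidates = [
        os.path.join(cuda_home, "include", "cuda.h"),
        # JetPack also keeps headers under the aarch64 target tree (assumption)
        "/usr/local/cuda-10.2/targets/aarch64-linux/include/cuda.h",
    ]
    return [(p, os.path.isfile(p)) for p in candidates]

for path, exists in cuda_header_candidates():
    print(("found:   " if exists else "MISSING: ") + path)
```

If every candidate is missing, the CUDA toolkit is not present in the container at all. As far as I know, on these r32.x images CUDA is mounted from the host by the NVIDIA container runtime, so the toolkit (including headers) has to be installed on the host for --runtime nvidia to have something to mount.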
I tried other images, such as "l4t-pytorch:r32.7.1-pth1.9-py3", with the same result.
I need torch2trt in order to use trt_pose.
Environment
TensorRT Version: 8.0.1 (JetPack 4.6.2)
GPU Type: Jetson Xavier
Nvidia Driver Version:
CUDA Version: 10.2
CUDNN Version: 8.2.1
Operating System + Version:
TensorFlow Version:
PyTorch Version (if applicable): 1.9.0 / 1.8.0
Baremetal or Container: l4t-pytorch:r32.6.1-pth1.8-py3 / l4t-pytorch:r32.7.1-pth1.9-py3
Steps To Reproduce
sudo docker pull nvcr.io/nvidia/l4t-pytorch:r32.7.1-pth1.9-py3
sudo docker run -it --runtime nvidia --network host --entrypoint sh nvcr.io/nvidia/l4t-pytorch:r32.7.1-pth1.9-py3
git clone https://github.com/NVIDIA-AI-IOT/torch2trt
cd torch2trt
sudo python3 setup.py install --plugins
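Before the last step, it may help to confirm the container actually sees the CUDA headers. A hedged sketch (the /usr/local/cuda default is the standard JetPack path, an assumption):

```shell
# Check whether cuda.h is visible where the torch2trt build expects it.
# CUDA_HOME defaults to the standard JetPack symlink (assumption).
CUDA_HOME="${CUDA_HOME:-/usr/local/cuda}"
if [ -f "$CUDA_HOME/include/cuda.h" ]; then
    echo "cuda.h present under $CUDA_HOME/include"
else
    echo "cuda.h MISSING under $CUDA_HOME/include (CUDA not mounted into the container?)"
fi
```

If it reports MISSING inside the container but the same check succeeds on the host, the NVIDIA runtime is not mounting CUDA; if it fails on the host too, the CUDA toolkit itself is not installed there.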