PyTorch for Jetson

Hi @Andrey1984, see this patch to apply to cpp_extension.py - https://gist.github.com/dusty-nv/ce51796085178e1f38e3c6a1663a93a1#file-pytorch-1-11-jetpack-5-0-patch
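For anyone following along, a rough sketch of applying that patch (the filename and site-packages path are illustrative, not the exact steps from the gist - adjust them to your download location and torch install):

```bash
# Hedged sketch: patch the installed copy of torch/utils/cpp_extension.py,
# then retry the torchvision build.
cd ~/.local/lib/python3.8/site-packages/torch/utils
patch cpp_extension.py < ~/pytorch-1.11-jetpack-5.0.patch   # path/filename illustrative
```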

It got through that error after the patch, but not for long:

CLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1013"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
FAILED: /home/nvidia/torchvision/build/temp.linux-aarch64-3.8/home/nvidia/torchvision/torchvision/csrc/ops/quantized/cpu/qnms_kernel.o 
c++ -MMD -MF /home/nvidia/torchvision/build/temp.linux-aarch64-3.8/home/nvidia/torchvision/torchvision/csrc/ops/quantized/cpu/qnms_kernel.o.d -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DWITH_CUDA -I/home/nvidia/torchvision/torchvision/csrc -I/home/nvidia/.local/lib/python3.8/site-packages/torch/include -I/home/nvidia/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/nvidia/.local/lib/python3.8/site-packages/torch/include/TH -I/home/nvidia/.local/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.4/include -I/usr/include/python3.8 -c -c /home/nvidia/torchvision/torchvision/csrc/ops/quantized/cpu/qnms_kernel.cpp -o /home/nvidia/torchvision/build/temp.linux-aarch64-3.8/home/nvidia/torchvision/torchvision/csrc/ops/quantized/cpu/qnms_kernel.o -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1013"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
/home/nvidia/torchvision/torchvision/csrc/ops/quantized/cpu/qnms_kernel.cpp:2:10: fatal error: ATen/native/quantized/affine_quantizer.h: No such file or directory
    2 | #include <ATen/native/quantized/affine_quantizer.h>
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.

This is because PyTorch core renamed affine_quantizer.h to AffineQuantizer.h
[from GitHub issues]
so adding a symlink resolves it.
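A minimal sketch of that workaround (the include path is illustrative - point it at wherever your torch package is installed, and it assumes the renamed header sits in the same directory):

```bash
# Hedged sketch: the installed torch headers ship AffineQuantizer.h, while this
# torchvision source still includes affine_quantizer.h, so add a lowercase symlink.
cd ~/.local/lib/python3.8/site-packages/torch/include/ATen/native/quantized
ln -s AffineQuantizer.h affine_quantizer.h
```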

Using /usr/local/lib/python3.8/dist-packages
Finished processing dependencies for torchvision==0.13.1
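A quick sanity check after the install finishes (a minimal snippet, not part of the original log):

```bash
# Verify that both packages import and report the expected versions.
python3 -c "import torch, torchvision; print(torch.__version__, torchvision.__version__)"
```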

outputs

@dusty_nv
Done. Thanks

As per my build instructions, I normally build with QNNPACK disabled (which I believe disables quantization)
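For reference, a hedged sketch of the kind of flags that disable QNNPACK when building PyTorch from source (USE_QNNPACK / USE_PYTORCH_QNNPACK are standard PyTorch build environment variables; the exact set used in the top-post build instructions may differ):

```bash
# Hedged sketch: disable QNNPACK before building PyTorch from source.
export USE_QNNPACK=0
export USE_PYTORCH_QNNPACK=0
python3 setup.py bdist_wheel
```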

That looks cool though! What model is that?

It is Omni3D & Cube R-CNN: https://garrickbrazil.com/omni3d/

root@nvidia-desktop:/home/nvidia/Test/torchvision# export BUILD_VERSION=0.10.0
root@nvidia-desktop:/home/nvidia/Test/torchvision# python3 setup.py install --user
Building wheel torchvision-0.10.0
PNG found: False
Running build on conda-build: False
Running build on conda: False
JPEG found: True
Building torchvision with JPEG image support
NVJPEG found: False
FFmpeg found: True
ffmpeg include path: ['/usr/include', '/usr/include/aarch64-linux-gnu']
ffmpeg library_dir: ['/usr/lib', '/usr/lib/aarch64-linux-gnu']
running install
running bdist_egg
running egg_info
writing torchvision.egg-info/PKG-INFO
writing dependency_links to torchvision.egg-info/dependency_links.txt
writing requirements to torchvision.egg-info/requires.txt
writing top-level names to torchvision.egg-info/top_level.txt
reading manifest file 'torchvision.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no previously-included files matching '__pycache__' found under directory '*'
warning: no previously-included files matching '*.py[co]' found under directory '*'
writing manifest file 'torchvision.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-aarch64/egg
running install_lib
running build_py
copying torchvision/version.py → build/lib.linux-aarch64-3.6/torchvision
running build_ext
building ‘torchvision._C’ extension
Emitting ninja build file /home/nvidia/Test/torchvision/build/temp.linux-aarch64-3.6/build.ninja…
Compiling objects…
Allowing ninja to set a default number of workers… (overridable by setting the environment variable MAX_JOBS=N)
[1/35] c++ -MMD -MF /home/nvidia/Test/torchvision/build/temp.linux-aarch64-3.6/home/nvidia/Test/torchvision/torchvision/csrc/ops/autocast/ps_roi_align_kernel.o.d -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DWITH_CUDA -I/home/nvidia/Test/torchvision/torchvision/csrc -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.6m -c -c /home/nvidia/Test/torchvision/torchvision/csrc/ops/autocast/ps_roi_align_kernel.cpp -o /home/nvidia/Test/torchvision/build/temp.linux-aarch64-3.6/home/nvidia/Test/torchvision/torchvision/csrc/ops/autocast/ps_roi_align_kernel.o -DTORCH_API_INCLUDE_EXTENSION_H ‘-DPYBIND11_COMPILER_TYPE=“_gcc”’ ‘-DPYBIND11_STDLIB=“_libstdcpp”’ ‘-DPYBIND11_BUILD_ABI=“_cxxabi1011”’ -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
FAILED: /home/nvidia/Test/torchvision/build/temp.linux-aarch64-3.6/home/nvidia/Test/torchvision/torchvision/csrc/ops/autocast/ps_roi_align_kernel.o
c++ -MMD -MF /home/nvidia/Test/torchvision/build/temp.linux-aarch64-3.6/home/nvidia/Test/torchvision/torchvision/csrc/ops/autocast/ps_roi_align_kernel.o.d -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DWITH_CUDA -I/home/nvidia/Test/torchvision/torchvision/csrc -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.6m -c -c /home/nvidia/Test/torchvision/torchvision/csrc/ops/autocast/ps_roi_align_kernel.cpp -o /home/nvidia/Test/torchvision/build/temp.linux-aarch64-3.6/home/nvidia/Test/torchvision/torchvision/csrc/ops/autocast/ps_roi_align_kernel.o -DTORCH_API_INCLUDE_EXTENSION_H ‘-DPYBIND11_COMPILER_TYPE=“_gcc”’ ‘-DPYBIND11_STDLIB=“_libstdcpp”’ ‘-DPYBIND11_BUILD_ABI=“_cxxabi1011”’ -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
c++: internal compiler error: Killed (program cc1plus)
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:///usr/share/doc/gcc-7/README.Bugs> for instructions.
[2/35] c++ -MMD -MF /home/nvidia/Test/torchvision/build/temp.linux-aarch64-3.6/home/nvidia/Test/torchvision/torchvision/csrc/ops/autocast/nms_kernel.o.d -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DWITH_CUDA -I/home/nvidia/Test/torchvision/torchvision/csrc -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.6m -c -c /home/nvidia/Test/torchvision/torchvision/csrc/ops/autocast/nms_kernel.cpp -o /home/nvidia/Test/torchvision/build/temp.linux-aarch64-3.6/home/nvidia/Test/torchvision/torchvision/csrc/ops/autocast/nms_kernel.o -DTORCH_API_INCLUDE_EXTENSION_H ‘-DPYBIND11_COMPILER_TYPE=“_gcc”’ ‘-DPYBIND11_STDLIB=“_libstdcpp”’ ‘-DPYBIND11_BUILD_ABI=“_cxxabi1011”’ -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
[3/35] c++ -MMD -MF /home/nvidia/Test/torchvision/build/temp.linux-aarch64-3.6/home/nvidia/Test/torchvision/torchvision/csrc/ops/autocast/roi_align_kernel.o.d -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DWITH_CUDA -I/home/nvidia/Test/torchvision/torchvision/csrc -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.6m -c -c /home/nvidia/Test/torchvision/torchvision/csrc/ops/autocast/roi_align_kernel.cpp -o /home/nvidia/Test/torchvision/build/temp.linux-aarch64-3.6/home/nvidia/Test/torchvision/torchvision/csrc/ops/autocast/roi_align_kernel.o -DTORCH_API_INCLUDE_EXTENSION_H ‘-DPYBIND11_COMPILER_TYPE=“_gcc”’ ‘-DPYBIND11_STDLIB=“_libstdcpp”’ ‘-DPYBIND11_BUILD_ABI=“_cxxabi1011”’ -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
[4/35] c++ -MMD -MF /home/nvidia/Test/torchvision/build/temp.linux-aarch64-3.6/home/nvidia/Test/torchvision/torchvision/csrc/ops/autocast/deform_conv2d_kernel.o.d -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DWITH_CUDA -I/home/nvidia/Test/torchvision/torchvision/csrc -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.6m -c -c /home/nvidia/Test/torchvision/torchvision/csrc/ops/autocast/deform_conv2d_kernel.cpp -o /home/nvidia/Test/torchvision/build/temp.linux-aarch64-3.6/home/nvidia/Test/torchvision/torchvision/csrc/ops/autocast/deform_conv2d_kernel.o -DTORCH_API_INCLUDE_EXTENSION_H ‘-DPYBIND11_COMPILER_TYPE=“_gcc”’ ‘-DPYBIND11_STDLIB=“_libstdcpp”’ ‘-DPYBIND11_BUILD_ABI=“_cxxabi1011”’ -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
[5/35] c++ -MMD -MF /home/nvidia/Test/torchvision/build/temp.linux-aarch64-3.6/home/nvidia/Test/torchvision/torchvision/csrc/ops/autocast/ps_roi_pool_kernel.o.d -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DWITH_CUDA -I/home/nvidia/Test/torchvision/torchvision/csrc -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.6m -c -c /home/nvidia/Test/torchvision/torchvision/csrc/ops/autocast/ps_roi_pool_kernel.cpp -o /home/nvidia/Test/torchvision/build/temp.linux-aarch64-3.6/home/nvidia/Test/torchvision/torchvision/csrc/ops/autocast/ps_roi_pool_kernel.o -DTORCH_API_INCLUDE_EXTENSION_H ‘-DPYBIND11_COMPILER_TYPE=“_gcc”’ ‘-DPYBIND11_STDLIB=“_libstdcpp”’ ‘-DPYBIND11_BUILD_ABI=“_cxxabi1011”’ -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
[6/35] c++ -MMD -MF /home/nvidia/Test/torchvision/build/temp.linux-aarch64-3.6/home/nvidia/Test/torchvision/torchvision/csrc/ops/autograd/deform_conv2d_kernel.o.d -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DWITH_CUDA -I/home/nvidia/Test/torchvision/torchvision/csrc -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.6m -c -c /home/nvidia/Test/torchvision/torchvision/csrc/ops/autograd/deform_conv2d_kernel.cpp -o /home/nvidia/Test/torchvision/build/temp.linux-aarch64-3.6/home/nvidia/Test/torchvision/torchvision/csrc/ops/autograd/deform_conv2d_kernel.o -DTORCH_API_INCLUDE_EXTENSION_H ‘-DPYBIND11_COMPILER_TYPE=“_gcc”’ ‘-DPYBIND11_STDLIB=“_libstdcpp”’ ‘-DPYBIND11_BUILD_ABI=“_cxxabi1011”’ -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
In file included from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/autograd.h:4:0,
from /home/nvidia/Test/torchvision/torchvision/csrc/ops/autograd/deform_conv2d_kernel.cpp:3:
/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/autograd/custom_function.h: In instantiation of ‘torch::autograd::variable_list torch::autograd::CppNode<T>::apply(torch::autograd::variable_list&&) [with T = vision::ops::{anonymous}::DeformConv2dBackwardFunction; torch::autograd::variable_list = std::vector<at::Tensor>]’:
/home/nvidia/Test/torchvision/torchvision/csrc/ops/autograd/deform_conv2d_kernel.cpp:266:1: required from here
/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/autograd/custom_function.h:311:19: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
if (num_outputs > num_forward_inputs) {
^~~~~~~~
/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/autograd/custom_function.h:323:19: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
if (num_outputs != num_forward_inputs) {
^~~~~~~~~
/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/autograd/custom_function.h:334:21: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
for (int i = 0; i < num_outputs; ++i) {
^~~~~~~~~~~
/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/autograd/custom_function.h: In instantiation of ‘torch::autograd::variable_list torch::autograd::CppNode<T>::apply(torch::autograd::variable_list&&) [with T = vision::ops::{anonymous}::DeformConv2dFunction; torch::autograd::variable_list = std::vector<at::Tensor>]’:
/home/nvidia/Test/torchvision/torchvision/csrc/ops/autograd/deform_conv2d_kernel.cpp:266:1: required from here
/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/autograd/custom_function.h:311:19: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
if (num_outputs > num_forward_inputs) {
^~~~~~~~
/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/autograd/custom_function.h:323:19: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
if (num_outputs != num_forward_inputs) {
^~~~~~~~~
/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/autograd/custom_function.h:334:21: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
for (int i = 0; i < num_outputs; ++i) {
^~~~~~~~~~~
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File “/usr/local/lib/python3.6/dist-packages/torch/utils/cpp_extension.py”, line 1672, in _run_ninja_build
env=env)
File “/usr/lib/python3.6/subprocess.py”, line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command ‘[‘ninja’, ‘-v’]’ returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File “setup.py”, line 488, in <module>
‘clean’: clean,
File “/usr/lib/python3/dist-packages/setuptools/__init__.py”, line 129, in setup
return distutils.core.setup(**attrs)
File “/usr/lib/python3.6/distutils/core.py”, line 148, in setup
dist.run_commands()
File “/usr/lib/python3.6/distutils/dist.py”, line 955, in run_commands
self.run_command(cmd)
File “/usr/lib/python3.6/distutils/dist.py”, line 974, in run_command
cmd_obj.run()
File “/usr/lib/python3/dist-packages/setuptools/command/install.py”, line 67, in run
self.do_egg_install()
File “/usr/lib/python3/dist-packages/setuptools/command/install.py”, line 109, in do_egg_install
self.run_command(‘bdist_egg’)
File “/usr/lib/python3.6/distutils/cmd.py”, line 313, in run_command
self.distribution.run_command(command)
File “/usr/lib/python3.6/distutils/dist.py”, line 974, in run_command
cmd_obj.run()
File “/usr/lib/python3/dist-packages/setuptools/command/bdist_egg.py”, line 172, in run
cmd = self.call_command(‘install_lib’, warn_dir=0)
File “/usr/lib/python3/dist-packages/setuptools/command/bdist_egg.py”, line 158, in call_command
self.run_command(cmdname)
File “/usr/lib/python3.6/distutils/cmd.py”, line 313, in run_command
self.distribution.run_command(command)
File “/usr/lib/python3.6/distutils/dist.py”, line 974, in run_command
cmd_obj.run()
File “/usr/lib/python3/dist-packages/setuptools/command/install_lib.py”, line 24, in run
self.build()
File “/usr/lib/python3.6/distutils/command/install_lib.py”, line 109, in build
self.run_command(‘build_ext’)
File “/usr/lib/python3.6/distutils/cmd.py”, line 313, in run_command
self.distribution.run_command(command)
File “/usr/lib/python3.6/distutils/dist.py”, line 974, in run_command
cmd_obj.run()
File “/usr/lib/python3/dist-packages/setuptools/command/build_ext.py”, line 78, in run
_build_ext.run(self)
File “/usr/local/lib/python3.6/dist-packages/Cython/Distutils/old_build_ext.py”, line 186, in run
_build_ext.build_ext.run(self)
File “/usr/lib/python3.6/distutils/command/build_ext.py”, line 339, in run
self.build_extensions()
File “/usr/local/lib/python3.6/dist-packages/torch/utils/cpp_extension.py”, line 709, in build_extensions
build_ext.build_extensions(self)
File “/usr/local/lib/python3.6/dist-packages/Cython/Distutils/old_build_ext.py”, line 195, in build_extensions
_build_ext.build_ext.build_extensions(self)
File “/usr/lib/python3.6/distutils/command/build_ext.py”, line 448, in build_extensions
self._build_extensions_serial()
File “/usr/lib/python3.6/distutils/command/build_ext.py”, line 473, in _build_extensions_serial
self.build_extension(ext)
File “/usr/lib/python3/dist-packages/setuptools/command/build_ext.py”, line 199, in build_extension
_build_ext.build_extension(self, ext)
File “/usr/lib/python3.6/distutils/command/build_ext.py”, line 533, in build_extension
depends=ext.depends)
File “/usr/local/lib/python3.6/dist-packages/torch/utils/cpp_extension.py”, line 539, in unix_wrap_ninja_compile
with_cuda=with_cuda)
File “/usr/local/lib/python3.6/dist-packages/torch/utils/cpp_extension.py”, line 1360, in _write_ninja_file_and_compile_objects
error_prefix=‘Error compiling objects for extension’)
File “/usr/local/lib/python3.6/dist-packages/torch/utils/cpp_extension.py”, line 1682, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension

I'm having this issue when installing torchvision v0.10.0 with PyTorch v1.9.0; any guidance would be appreciated.

Typically the Killed message means that the board ran out of memory during compilation. Can you try mounting swap, closing any unneeded applications, and if needed disabling the desktop GUI? You can find how to do those things here:
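A hedged sketch of those steps (swap size and path are illustrative; the linked instructions are authoritative):

```bash
# Hedged sketch of the memory-saving steps above (swap size/path illustrative).
sudo fallocate -l 4G /mnt/4GB.swap        # create a 4 GB swap file
sudo chmod 600 /mnt/4GB.swap
sudo mkswap /mnt/4GB.swap
sudo swapon /mnt/4GB.swap
sudo systemctl isolate multi-user.target  # stop the desktop GUI for this session
export MAX_JOBS=2                         # limit parallel compile jobs so cc1plus isn't OOM-killed
```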

Hi, could someone help me?
Following this guide I installed torch, and the installation seems to have gone well (version 1.7.0 on Python 3.6).
However, when I go to install torchvision I run into problems and it reports "module not found", i.e. the installation does not complete.
Could someone help me?

it works, solved my problem… thx

Hello, after installing PyTorch 1.12.0 I found that distributed.is_available() == False when developing on Jetson AGX Orin.

Then I changed torch to 1.11.0, which gave an error about MPI missing; after installing it via apt it works.

At first I thought it might be related to libopenmpi-dev, so I tried 1.12.0 again and it failed again, so I guess some build options were not defined correctly (maybe). If not, can you give me some advice? I really need the latest version to develop MONAI apps.
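For reference, a quick way to check whether a given wheel was built with distributed support (a minimal snippet, not from the original post):

```bash
python3 -c "import torch; print(torch.distributed.is_available())"
```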

Hi @1017208039, PyTorch 1.11 was the last version that I personally built, and those were built with USE_DISTRIBUTED enabled (with OpenMPI). The newer official PyTorch wheels were built by another team, and those don't have distributed enabled since it's not all that common a use-case for embedded Jetson systems. If you require PyTorch 1.12 with MPI support, I suggest that you rebuild PyTorch from source after installing libopenmpi-dev on your system.
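A hedged sketch of that rebuild (USE_DISTRIBUTED and USE_NCCL are standard PyTorch build environment variables; the other flags from the build instructions at the top of this thread still apply):

```bash
# Hedged sketch: rebuild PyTorch with distributed (OpenMPI) support.
sudo apt-get install -y libopenmpi-dev
export USE_DISTRIBUTED=1
export USE_NCCL=0            # NCCL isn't available on Jetson
python3 setup.py bdist_wheel
```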

Hi @kekkaammy98, I take it that you are having problems importing torchvision - which steps did you follow to install it? Did you encounter any errors when you compiled it?

Note that if you continue having problems, the l4t-pytorch containers already have PyTorch and torchvision pre-installed in them.
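For example, something along these lines (the tag is illustrative - pick the one matching your JetPack/L4T release from the NGC catalog page):

```bash
# Hedged sketch: run an l4t-pytorch container with GPU access on the Jetson.
sudo docker run -it --rm --runtime nvidia --network host nvcr.io/nvidia/l4t-pytorch:r35.1.0-pth1.12-py3
```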

Hi @dusty_nv, I notice that the patch for PyTorch 1.11 on JetPack 5.0 doesn't include the fix for issue #8103. Can you please tell me why?

I'm asking because I'm working on building PyTorch 1.11/1.12 (with the MAGMA lib) for AGX Xavier & Xavier NX on JetPack 5.0.2, and also TX2 on JetPack 4.6.2, and I'm not sure whether I have to include that fix or not.

Thank you.

Hi @huyhung411991, the JetPack 5.x wheels are for Xavier & Orin, and I’ve only seen that issue on TX1/TX2/Nano. So to get the full perf on Xavier & Orin, I did not include it. However if you do encounter the issue on JetPack 5, please let me know - thanks.

Hi all,

I am trying to install PyTorch 1.12 on my Jetson Nano but I am running into problems.
After running pip install on the wheel file and trying to import torch in a Python shell, I get the following error:

Does anybody have a clue why this happens?

Python version: 3.8.15
CUDA version: Build cuda_10.2_r440.TC440_70.29663091_0

Any suggestions will be greatly appreciated!

Hi @wytzepj, Jetson Nano runs JetPack 4.x + CUDA 10, and that PyTorch 1.12 wheel is for JetPack 5.x and CUDA 11. There is a PyTorch 1.10 wheel for JetPack 4.x though:

PyTorch v1.10.0
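A hedged sketch of installing that wheel (the filename is illustrative - use the actual v1.10.0 wheel linked at the top of this thread):

```bash
# Hedged sketch: install the JetPack 4.x / Python 3.6 wheel with its runtime deps.
sudo apt-get install -y python3-pip libopenblas-base libopenmpi-dev
pip3 install ./torch-1.10.0-cp36-cp36m-linux_aarch64.whl   # filename illustrative
```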

Thanks for your reply!
Ah, that makes sense. However, I would like to use Python 3.8 (a requirement for YOLOv5); is that not possible with JetPack 4.x?

You would need to rebuild PyTorch on JetPack 4.x for Python 3.8 - the normal instructions for building PyTorch on Jetson are included at the top of this thread. However, here are some other posts on it for Python 3.8:
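In outline, a hedged sketch of what that rebuild typically involves (package names are Ubuntu 18.04's; the linked posts and the top-post build flags are authoritative):

```bash
# Hedged sketch: install Python 3.8 alongside the system 3.6, then run the
# PyTorch build under that interpreter so the resulting wheel targets cp38.
sudo apt-get install -y python3.8 python3.8-dev python3.8-venv python3.8-distutils
python3.8 -m ensurepip --upgrade
python3.8 -m pip install -U pip setuptools wheel
python3.8 setup.py bdist_wheel
```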

Hi, I have installed PyTorch 1.8 on the Jetson Nano and it works fine. But with torchvision v0.9.0 I get the error "SyntaxError: future feature annotations is not defined". This is because Python 3.6.9, which I have installed, does not support this feature. What do I have to do to get it working?

Hi @xphenonx, can you try running this: pip3 install 'pillow<9'
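If it helps, a quick check (illustrative) that the pinned Pillow lets torchvision import cleanly:

```bash
python3 -c "import PIL, torchvision; print(PIL.__version__, torchvision.__version__)"
```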


Is there a compatibility matrix for versions before JetPack 4.6.1? This link only goes up to 4.6.1.

I need it for JP 4.6 and 4.5.1.