Building PyTorch on Jetson Xavier NX fails when building caffe2

Hi,
I'm trying to build PyTorch 1.6.0 on a Jetson Xavier NX, following the steps in this guide:


but it fails with the following output:

python3 setup.py develop
Building wheel torch-1.6.0a0+3e957d0
-- Building version 1.6.0a0+3e957d0
cmake -GNinja -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/home/uname/mypytorch/pytorch/torch -DCMAKE_PREFIX_PATH=/usr/lib/python3/dist-packages -DNUMPY_INCLUDE_DIR=/usr/lib/python3/dist-packages/numpy/core/include -DPYTHON_EXECUTABLE=/usr/bin/python3 -DPYTHON_INCLUDE_DIR=/usr/include/python3.6m -DPYTHON_LIBRARY=/usr/lib/libpython3.6m.so.1.0 -DTORCH_BUILD_VERSION=1.6.0a0+3e957d0 -DUSE_NUMPY=True /home/uname/mypytorch/pytorch
-- std::exception_ptr is supported.
-- Turning off deprecation warning due to glog.
-- Building using own protobuf under third_party per request.
-- Use custom protobuf build.
--
-- 3.11.4.0
-- Caffe2 protobuf include directory: $<BUILD_INTERFACE:/home/uname/mypytorch/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include>
-- Trying to find preferred BLAS backend of choice: MKL
-- MKL_THREADING = OMP
-- MKL_THREADING = OMP
CMake Warning at cmake/Dependencies.cmake:148 (message):
MKL could not be found. Defaulting to Eigen
Call Stack (most recent call first):
CMakeLists.txt:469 (include)

CMake Warning at cmake/Dependencies.cmake:172 (message):
Preferred BLAS (MKL) cannot be found, now searching for a general BLAS
library
Call Stack (most recent call first):
CMakeLists.txt:469 (include)

-- MKL_THREADING = OMP
-- Checking for [mkl_intel_lp64 - mkl_gnu_thread - mkl_core - gomp - pthread - m - dl]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - gomp - pthread - m - dl]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_gnu_thread - mkl_core - gomp - pthread - m - dl]
-- Library mkl_intel: not found
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core - gomp - pthread - m - dl]
-- Library mkl_intel: not found
-- Checking for [mkl_gf_lp64 - mkl_gnu_thread - mkl_core - gomp - pthread - m - dl]
-- Library mkl_gf_lp64: not found
-- Checking for [mkl_gf_lp64 - mkl_intel_thread - mkl_core - gomp - pthread - m - dl]
-- Library mkl_gf_lp64: not found
-- Checking for [mkl_gf - mkl_gnu_thread - mkl_core - gomp - pthread - m - dl]
-- Library mkl_gf: not found
-- Checking for [mkl_gf - mkl_intel_thread - mkl_core - gomp - pthread - m - dl]
-- Library mkl_gf: not found
-- Checking for [mkl_intel_lp64 - mkl_gnu_thread - mkl_core - iomp5 - pthread - m - dl]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - iomp5 - pthread - m - dl]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_gnu_thread - mkl_core - iomp5 - pthread - m - dl]
-- Library mkl_intel: not found
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core - iomp5 - pthread - m - dl]
-- Library mkl_intel: not found
-- Checking for [mkl_gf_lp64 - mkl_gnu_thread - mkl_core - iomp5 - pthread - m - dl]
-- Library mkl_gf_lp64: not found
-- Checking for [mkl_gf_lp64 - mkl_intel_thread - mkl_core - iomp5 - pthread - m - dl]
-- Library mkl_gf_lp64: not found
-- Checking for [mkl_gf - mkl_gnu_thread - mkl_core - iomp5 - pthread - m - dl]
-- Library mkl_gf: not found
-- Checking for [mkl_gf - mkl_intel_thread - mkl_core - iomp5 - pthread - m - dl]
-- Library mkl_gf: not found
-- Checking for [mkl_intel_lp64 - mkl_gnu_thread - mkl_core - pthread - m - dl]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - pthread - m - dl]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_gnu_thread - mkl_core - pthread - m - dl]
-- Library mkl_intel: not found
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core - pthread - m - dl]
-- Library mkl_intel: not found
-- Checking for [mkl_gf_lp64 - mkl_gnu_thread - mkl_core - pthread - m - dl]
-- Library mkl_gf_lp64: not found
-- Checking for [mkl_gf_lp64 - mkl_intel_thread - mkl_core - pthread - m - dl]
-- Library mkl_gf_lp64: not found
-- Checking for [mkl_gf - mkl_gnu_thread - mkl_core - pthread - m - dl]
-- Library mkl_gf: not found
-- Checking for [mkl_gf - mkl_intel_thread - mkl_core - pthread - m - dl]
-- Library mkl_gf: not found
-- Checking for [mkl_intel_lp64 - mkl_sequential - mkl_core - m - dl]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_sequential - mkl_core - m - dl]
-- Library mkl_intel: not found
-- Checking for [mkl_gf_lp64 - mkl_sequential - mkl_core - m - dl]
-- Library mkl_gf_lp64: not found
-- Checking for [mkl_gf - mkl_sequential - mkl_core - m - dl]
-- Library mkl_gf: not found
-- Checking for [mkl_intel_lp64 - mkl_core - gomp - pthread - m - dl]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_core - gomp - pthread - m - dl]
-- Library mkl_intel: not found
-- Checking for [mkl_gf_lp64 - mkl_core - gomp - pthread - m - dl]
-- Library mkl_gf_lp64: not found
-- Checking for [mkl_gf - mkl_core - gomp - pthread - m - dl]
-- Library mkl_gf: not found
-- Checking for [mkl_intel_lp64 - mkl_core - iomp5 - pthread - m - dl]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_core - iomp5 - pthread - m - dl]
-- Library mkl_intel: not found
-- Checking for [mkl_gf_lp64 - mkl_core - iomp5 - pthread - m - dl]
-- Library mkl_gf_lp64: not found
-- Checking for [mkl_gf - mkl_core - iomp5 - pthread - m - dl]
-- Library mkl_gf: not found
-- Checking for [mkl_intel_lp64 - mkl_core - pthread - m - dl]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_core - pthread - m - dl]
-- Library mkl_intel: not found
-- Checking for [mkl_gf_lp64 - mkl_core - pthread - m - dl]
-- Library mkl_gf_lp64: not found
-- Checking for [mkl_gf - mkl_core - pthread - m - dl]
-- Library mkl_gf: not found
-- Checking for [mkl - guide - pthread - m]
-- Library mkl: not found
-- MKL library not found
-- Checking for [Accelerate]
-- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND
-- Checking for [vecLib]
-- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND
-- Checking for [openblas]
-- Library openblas: /usr/lib/aarch64-linux-gnu/libopenblas.so
-- Found a library with BLAS API (open).
-- Brace yourself, we are building NNPACK
-- NNPACK backend is neon
-- Found PythonInterp: /usr/bin/python3 (found version "3.6.9")
-- git Version: v1.4.0-505be96a
-- Version: 1.4.0
-- Performing Test HAVE_STD_REGEX -- success
-- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile
-- Performing Test HAVE_POSIX_REGEX -- success
-- Performing Test HAVE_STEADY_CLOCK -- success
CMake Warning at cmake/Dependencies.cmake:652 (message):
Turning USE_FAKELOWP off as it depends on USE_FBGEMM.
Call Stack (most recent call first):
CMakeLists.txt:469 (include)

-- Found Numa (include: /usr/include, library: /usr/lib/aarch64-linux-gnu/libnuma.so)
-- Using third party subdirectory Eigen.
-- Found PythonInterp: /usr/bin/python3 (found suitable version "3.6.9", minimum required is "3.0")
-- Using third_party/pybind11.
-- pybind11 include dirs: /home/uname/mypytorch/pytorch/cmake/../third_party/pybind11/include
-- MPI support found
-- MPI compile flags: -pthread
-- MPI include path: /usr/lib/aarch64-linux-gnu/openmpi/include/openmpi;/usr/lib/aarch64-linux-gnu/openmpi/include/openmpi/opal/mca/event/libevent2022/libevent;/usr/lib/aarch64-linux-gnu/openmpi/include/openmpi/opal/mca/event/libevent2022/libevent/include;/usr/lib/aarch64-linux-gnu/openmpi/include
-- MPI LINK flags path: -pthread
-- MPI libraries: /usr/lib/aarch64-linux-gnu/openmpi/lib/libmpi_cxx.so;/usr/lib/aarch64-linux-gnu/openmpi/lib/libmpi.so
CMake Warning at cmake/Dependencies.cmake:950 (message):
OpenMPI found, but it is not built with CUDA support.
Call Stack (most recent call first):
CMakeLists.txt:469 (include)

-- Adding OpenMP CXX_FLAGS: -fopenmp
-- No OpenMP library needs to be linked against
-- Found CUDA: /usr/local/cuda (found version "10.2")
-- Caffe2: CUDA detected: 10.2
-- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda
-- Caffe2: Header version is: 10.2
-- Found cuDNN: v8.0.0 (include: /usr/include, library: /usr/lib/aarch64-linux-gnu/libcudnn.so)
-- Autodetected CUDA architecture(s): 7.2
-- Added CUDA NVCC flags for: -gencode;arch=compute_72,code=sm_72
-- Could NOT find CUB (missing: CUB_INCLUDE_DIR)
-- MPI include path: /usr/lib/aarch64-linux-gnu/openmpi/include/openmpi;/usr/lib/aarch64-linux-gnu/openmpi/include/openmpi/opal/mca/event/libevent2022/libevent;/usr/lib/aarch64-linux-gnu/openmpi/include/openmpi/opal/mca/event/libevent2022/libevent/include;/usr/lib/aarch64-linux-gnu/openmpi/include
-- MPI libraries: /usr/lib/aarch64-linux-gnu/openmpi/lib/libmpi_cxx.so;/usr/lib/aarch64-linux-gnu/openmpi/lib/libmpi.so
-- Found CUDA: /usr/local/cuda (found suitable version "10.2", minimum required is "7.0")
-- CUDA detected: 10.2
-- Found uv: 1.37.0 (found version "1.37.0")

-- ******** Summary ********
-- CMake version : 3.10.2
-- CMake command : /usr/bin/cmake
-- System : Linux
-- C++ compiler : /usr/bin/c++
-- C++ compiler version : 7.5.0
-- CXX flags : -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -Wnon-virtual-dtor
-- Build type : Release
-- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1
-- CMAKE_PREFIX_PATH : /usr/lib/python3/dist-packages;/usr/local/cuda
-- CMAKE_INSTALL_PREFIX : /home/uname/mypytorch/pytorch/torch
-- CMAKE_MODULE_PATH : /home/uname/mypytorch/pytorch/cmake/Modules;/home/uname/mypytorch/pytorch/cmake/public/../Modules_CUDA_fix

-- ONNX version : 1.4.1
-- ONNX NAMESPACE : onnx_torch
-- ONNX_BUILD_TESTS :
-- ONNX_BUILD_BENCHMARKS :
-- ONNX_USE_LITE_PROTO :
-- ONNXIFI_DUMMY_BACKEND :

-- Protobuf compiler :
-- Protobuf includes :
-- Protobuf libraries :
-- BUILD_ONNX_PYTHON :
CMake Error at cmake/public/utils.cmake:44 (get_target_property):
get_target_property() called with non-existent target "onnx".
Call Stack (most recent call first):
cmake/Dependencies.cmake:1358 (caffe2_interface_library)
CMakeLists.txt:469 (include)

-- Found CUDA with FP16 support, compiling with torch.cuda.HalfTensor
-- Adding -DNDEBUG to compile flags
-- MAGMA not found. Compiling without MAGMA support
-- Could not find hardware support for NEON on this machine.
-- No OMAP3 processor on this machine.
-- No OMAP4 processor on this machine.
-- asimd/Neon found with compiler flag : -D__NEON__
-- Found a library with LAPACK API (open).
disabling ROCM because NOT USE_ROCM is set
-- MIOpen not found. Compiling without MIOpen support
disabling MKLDNN because USE_MKLDNN is not set
-- Version: 6.2.0
-- Build type: Release
-- CXX_STANDARD: 14
-- Required features: cxx_variadic_templates
-- GCC 7.5.0: Adding gcc and gcc_s libs to link line
-- NUMA paths:
-- /usr/include
-- /usr/lib/aarch64-linux-gnu/libnuma.so
-- Using ATen parallel backend: OMP
-- Could NOT find OpenSSL, try to set the path to OpenSSL root folder in the system variable OPENSSL_ROOT_DIR (missing: OPENSSL_CRYPTO_LIBRARY OPENSSL_INCLUDE_DIR)
-- Configuring build for SLEEF-v3.4.0
Target system: Linux-4.9.140-tegra
Target processor: aarch64
Host system: Linux-4.9.140-tegra
Host processor: aarch64
Detected C compiler: GNU @ /usr/bin/cc
-- Using option -Wall -Wno-unused -Wno-attributes -Wno-unused-result -Wno-psabi -ffp-contract=off -fno-math-errno -fno-trapping-math to compile libsleef
-- Building shared libs : OFF
-- MPFR : LIB_MPFR-NOTFOUND
-- GMP : LIBGMP-NOTFOUND
-- RT : /usr/lib/aarch64-linux-gnu/librt.so
-- FFTW3 : LIBFFTW3-NOTFOUND
-- OPENSSL :
-- SDE : SDE_COMMAND-NOTFOUND
-- RUNNING_ON_TRAVIS : 0
-- COMPILER_SUPPORTS_OPENMP : 1
AT_INSTALL_INCLUDE_DIR include/ATen/core
core header install: /home/v/mypytorch/pytorch/build/aten/src/ATen/core/TensorBody.h
-- NCCL operators skipped due to no CUDA support
-- Excluding FakeLowP operators
-- Excluding ideep operators as we are not using ideep
-- Excluding image processing operators due to no opencv
-- Excluding video processing operators due to no opencv
-- Include Observer library
-- /usr/bin/c++ /home/uname/mypytorch/pytorch/caffe2/../torch/abi-check.cpp -o /home/uname/mypytorch/pytorch/build/abi-check
-- Determined _GLIBCXX_USE_CXX11_ABI=1
-- MPI_INCLUDE_PATH: /usr/lib/aarch64-linux-gnu/openmpi/include/openmpi;/usr/lib/aarch64-linux-gnu/openmpi/include/openmpi/opal/mca/event/libevent2022/libevent;/usr/lib/aarch64-linux-gnu/openmpi/include/openmpi/opal/mca/event/libevent2022/libevent/include;/usr/lib/aarch64-linux-gnu/openmpi/include
-- MPI_LIBRARIES: /usr/lib/aarch64-linux-gnu/openmpi/lib/libmpi_cxx.so;/usr/lib/aarch64-linux-gnu/openmpi/lib/libmpi.so
-- MPIEXEC: /usr/bin/mpiexec
-- pytorch is compiling with OpenMP.
OpenMP CXX_FLAGS: -fopenmp.
OpenMP libraries: /usr/lib/gcc/aarch64-linux-gnu/7/libgomp.so;/usr/lib/aarch64-linux-gnu/libpthread.so.
-- Caffe2 is compiling with OpenMP.
OpenMP CXX_FLAGS: -fopenmp.
OpenMP libraries: /usr/lib/gcc/aarch64-linux-gnu/7/libgomp.so;/usr/lib/aarch64-linux-gnu/libpthread.so.
-- Using lib/python3/dist-packages as python relative installation path
– Using lib/python3/dist-packages as python relative installation path
CMake Warning at CMakeLists.txt:690 (message):
Generated cmake files are only fully tested if one builds with system glog,
gflags, and protobuf. Other settings may generate files that are not well
tested.


-- ******** Summary ********
-- General:
-- CMake version : 3.10.2
-- CMake command : /usr/bin/cmake
-- System : Linux
-- C++ compiler : /usr/bin/c++
-- C++ compiler id : GNU
-- C++ compiler version : 7.5.0
-- BLAS : MKL
-- CXX flags : -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -DNDEBUG -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_INTERNAL_THREADPOOL_IMPL -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow
-- Build type : Release
-- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;ONNX_NAMESPACE=onnx_torch;HAVE_MMAP=1;FILE_OFFSET_BITS=64;HAVE_SHM_OPEN=1;HAVE_SHM_UNLINK=1;HAVE_MALLOC_USABLE_SIZE=1;USE_EXTERNAL_MZCRC;MINIZ_DISABLE_ZIP_READER_CRC32_CHECKS
-- CMAKE_PREFIX_PATH : /usr/lib/python3/dist-packages;/usr/local/cuda
-- CMAKE_INSTALL_PREFIX : /home/uname/mypytorch/pytorch/torch

-- TORCH_VERSION : 1.6.0
-- CAFFE2_VERSION : 1.6.0
-- BUILD_CAFFE2_MOBILE : OFF
-- USE_STATIC_DISPATCH : OFF
-- BUILD_BINARY : OFF
-- BUILD_CUSTOM_PROTOBUF : ON
-- Link local protobuf : ON
-- BUILD_DOCS : OFF
-- BUILD_PYTHON : True
-- Python version : 3.6.9
-- Python executable : /usr/bin/python3
-- Pythonlibs version : 3.6.9
-- Python library : /usr/lib/libpython3.6m.so.1.0
-- Python includes : /usr/include/python3.6m
-- Python site-packages: lib/python3/dist-packages
-- BUILD_CAFFE2_OPS : ON
-- BUILD_SHARED_LIBS : ON
-- BUILD_TEST : True
-- BUILD_JNI : OFF
-- INTERN_BUILD_MOBILE :
-- USE_ASAN : OFF
-- USE_CUDA : ON
-- CUDA static link : OFF
-- USE_CUDNN : ON
-- CUDA version : 10.2
-- cuDNN version : 8.0.0
-- CUDA root directory : /usr/local/cuda
-- CUDA library : /usr/local/cuda/lib64/stubs/libcuda.so
-- cudart library : /usr/local/cuda/lib64/libcudart.so
-- cublas library : /usr/lib/aarch64-linux-gnu/libcublas.so
-- cufft library : /usr/local/cuda/lib64/libcufft.so
-- curand library : /usr/local/cuda/lib64/libcurand.so
-- cuDNN library : /usr/lib/aarch64-linux-gnu/libcudnn.so
-- nvrtc : /usr/local/cuda/lib64/libnvrtc.so
-- CUDA include path : /usr/local/cuda/include
-- NVCC executable : /usr/local/cuda/bin/nvcc
-- NVCC flags : -DONNX_NAMESPACE=onnx_torch;-gencode;arch=compute_72,code=sm_72;-Xcudafe;--diag_suppress=cc_clobber_ignored;-Xcudafe;--diag_suppress=integer_sign_change;-Xcudafe;--diag_suppress=useless_using_declaration;-Xcudafe;--diag_suppress=set_but_not_used;-Xcudafe;--diag_suppress=field_without_dll_interface;-Xcudafe;--diag_suppress=base_class_has_different_dll_interface;-Xcudafe;--diag_suppress=dll_interface_conflict_none_assumed;-Xcudafe;--diag_suppress=dll_interface_conflict_dllexport_assumed;-Xcudafe;--diag_suppress=implicit_return_from_non_void_function;-Xcudafe;--diag_suppress=unsigned_compare_with_zero;-Xcudafe;--diag_suppress=declared_but_not_referenced;-Xcudafe;--diag_suppress=bad_friend_decl;-std=c++14;-Xcompiler;-fPIC;--expt-relaxed-constexpr;--expt-extended-lambda;-Wno-deprecated-gpu-targets;--expt-extended-lambda;-gencode;arch=compute_72,code=sm_72;-Xcompiler;-fPIC;-DCUDA_HAS_FP16=1;-D__CUDA_NO_HALF_OPERATORS__;-D__CUDA_NO_HALF_CONVERSIONS__;-D__CUDA_NO_HALF2_OPERATORS__
-- CUDA host compiler : /usr/bin/cc
-- NVCC --device-c : OFF
-- USE_TENSORRT : OFF
-- USE_ROCM : OFF
-- USE_EIGEN_FOR_BLAS : ON
-- USE_FBGEMM : OFF
-- USE_FAKELOWP : OFF
-- USE_FFMPEG : OFF
-- USE_GFLAGS : OFF
-- USE_GLOG : OFF
-- USE_LEVELDB : OFF
-- USE_LITE_PROTO : OFF
-- USE_LMDB : OFF
-- USE_METAL : OFF
-- USE_MKL : OFF
-- USE_MKLDNN : OFF
-- USE_NCCL : 0
-- USE_NNPACK : ON
-- USE_NUMPY : ON
-- USE_OBSERVERS : ON
-- USE_OPENCL : OFF
-- USE_OPENCV : OFF
-- USE_OPENMP : ON
-- USE_TBB : OFF
-- USE_VULKAN : OFF
-- USE_PROF : OFF
-- USE_QNNPACK : ON
-- USE_PYTORCH_QNNPACK : ON
-- USE_REDIS : OFF
-- USE_ROCKSDB : OFF
-- USE_ZMQ : OFF
-- USE_DISTRIBUTED : ON
-- USE_MPI : ON
-- USE_GLOO : ON
-- USE_TENSORPIPE : ON
-- Public Dependencies : Threads::Threads
-- Private Dependencies : cpuinfo;qnnpack;pytorch_qnnpack;nnpack;XNNPACK;/usr/lib/aarch64-linux-gnu/libnuma.so;fp16;/usr/lib/aarch64-linux-gnu/openmpi/lib/libmpi_cxx.so;/usr/lib/aarch64-linux-gnu/openmpi/lib/libmpi.so;gloo;tensorpipe;aten_op_header_gen;foxi_loader;rt;fmt::fmt-header-only;gcc_s;gcc;dl
-- Configuring incomplete, errors occurred!
See also "/home/uname/mypytorch/pytorch/build/CMakeFiles/CMakeOutput.log".
See also "/home/uname/mypytorch/pytorch/build/CMakeFiles/CMakeError.log".
Traceback (most recent call last):
  File "setup.py", line 732, in <module>
    build_deps()
  File "setup.py", line 316, in build_deps
    cmake=cmake)
  File "/home/uname/mypytorch/pytorch/tools/build_pytorch_libs.py", line 59, in build_caffe2
    rerun_cmake)
  File "/home/uname/mypytorch/pytorch/tools/setup_helpers/cmake.py", line 329, in generate
    self.run(args, env=my_env)
  File "/home/uname/mypytorch/pytorch/tools/setup_helpers/cmake.py", line 141, in run
    check_call(command, cwd=self.build_dir, env=env)
  File "/usr/lib/python3.6/subprocess.py", line 311, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '-GNinja', '-DBUILD_PYTHON=True', '-DBUILD_TEST=True', '-DCMAKE_BUILD_TYPE=Release', '-DCMAKE_INSTALL_PREFIX=/home/uname/mypytorch/pytorch/torch', '-DCMAKE_PREFIX_PATH=/usr/lib/python3/dist-packages', '-DNUMPY_INCLUDE_DIR=/usr/lib/python3/dist-packages/numpy/core/include', '-DPYTHON_EXECUTABLE=/usr/bin/python3', '-DPYTHON_INCLUDE_DIR=/usr/include/python3.6m', '-DPYTHON_LIBRARY=/usr/lib/libpython3.6m.so.1.0', '-DTORCH_BUILD_VERSION=1.6.0a0+3e957d0', '-DUSE_NUMPY=True', '/home/uname/mypytorch/pytorch']' returned non-zero exit status 1.

Any help or advice would be greatly appreciated! :)

Hi @torsteinr, that does appear to be the error, but I haven't seen it before.

When you cloned the PyTorch repo, did you clone it with the --recursive flag? That is the only thing I can find about this error when searching.
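Along the same lines, one way to double-check whether the submodules actually came down is to look at the submodule status from the repo root: the missing `onnx` CMake target usually means `third_party/onnx` never got populated. A minimal sketch, using the checkout path from your log (adjust it if yours differs):

```shell
# Path taken from the log above; adjust if your checkout differs.
cd /home/uname/mypytorch/pytorch

# Uninitialized submodules are listed with a leading '-' in the
# status output; an uninitialized third_party/onnx would explain
# the "non-existent target onnx" CMake error.
git submodule status | grep '^-'

# Fetch anything that is missing, including nested submodules.
git submodule update --init --recursive
```

After pulling in missing submodules, it is also worth deleting the `build/` directory before re-running `setup.py`, since a stale CMake cache can keep reproducing the old configuration failure.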

Yes, I cloned with the --recursive flag. Anyway, I decided to try cloning again and ended up looking more closely at the branches and tags. This time I cloned v1.6.0-rc7, and building with 'python3 tools/build_libtorch.py' worked fine!
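For anyone hitting the same error later, the sequence that ended up working can be sketched as below. The tag name and build command are from this thread; the clone location and starting from a fresh directory are assumptions:

```shell
# Clone the release-candidate tag directly, with all submodules,
# into a fresh directory (avoids any stale build/ state).
git clone --recursive --branch v1.6.0-rc7 https://github.com/pytorch/pytorch.git
cd pytorch

# Build the libtorch C++ libraries, as described above.
python3 tools/build_libtorch.py
```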

Thanks for the help :)