Links I have consulted, but to no avail:
- (SOLVED)libcudart.so: error adding symbols: File in wrong format - #9 by imugly1029
- DJI Manifold2-G / Jetson TX2: compiling and installing PyTorch from source - 代码先锋网
- subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '7']' returned non-zero exit status 1 · Issue #20420 · pytorch/pytorch · GitHub
What I ran:
USE_MKLDNN=0 USE_QNNPACK=0 USE_NNPACK=0 USE_DISTRIBUTED=0 BUILD_TEST=0 python setup.py bdist_wheel
using the guide "DJI Manifold2-G / Jetson TX2: compiling and installing PyTorch from source" (代码先锋网, linked above).
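For completeness, here is the variant of the command I plan to try next. The two extra variables are my own additions, not from the guide: `TORCH_CUDA_ARCH_LIST="6.2"` matches the TX2's compute capability reported by deviceQuery below, and `MAX_JOBS` limits parallelism so the build does not exhaust the board's 8 GB of memory.

```shell
# My own additions (untested on this board): restrict CUDA codegen to the
# TX2's sm_62 and cap build parallelism to reduce memory pressure.
TORCH_CUDA_ARCH_LIST="6.2" MAX_JOBS=4 \
USE_MKLDNN=0 USE_QNNPACK=0 USE_NNPACK=0 USE_DISTRIBUTED=0 BUILD_TEST=0 \
python setup.py bdist_wheel
```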
Errors are as follows:
/usr/local/cuda/lib64/libcudnn.so: error adding symbols: File in wrong format
collect2: error: ld returned 1 exit status
caffe2/CMakeFiles/torch_global_deps.dir/build.make:104: recipe for target 'lib/libtorch_global_deps.so' failed
make[2]: *** [lib/libtorch_global_deps.so] Error 1
CMakeFiles/Makefile2:2295: recipe for target 'caffe2/CMakeFiles/torch_global_deps.dir/all' failed
make[1]: *** [caffe2/CMakeFiles/torch_global_deps.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
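My working assumption (not something the guide mentions) is that "File in wrong format" from ld means the linker found a library built for a different architecture, e.g. an x86_64 copy of libcudnn left over on this aarch64 board. A quick diagnostic sketch, using the paths from my own error message and collect_env output below:

```shell
# Check which architecture each copy of libcudnn was built for.
# Both should report "ELF 64-bit LSB shared object, ARM aarch64";
# anything reporting x86-64 would explain the "File in wrong format" error.
file /usr/local/cuda/lib64/libcudnn.so
file /usr/lib/aarch64-linux-gnu/libcudnn.so.7.1.5

# The toolchain should also target aarch64 natively.
gcc -dumpmachine    # expect: aarch64-linux-gnu
uname -m            # expect: aarch64
```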
and also
Makefile:145: recipe for target 'all' failed
make: *** [all] Error 2
Traceback (most recent call last):
  File "setup.py", line 745, in <module>
    build_deps()
  File "setup.py", line 311, in build_deps
    build_caffe2(version=version,
  File "/media/dji/80GBstore/pyenvs/newpy38/pytorch1.5/tools/build_pytorch_libs.py", line 62, in build_caffe2
    cmake.build(my_env)
  File "/media/dji/80GBstore/pyenvs/newpy38/pytorch1.5/tools/setup_helpers/cmake.py", line 339, in build
    self.run(build_args, my_env)
  File "/media/dji/80GBstore/pyenvs/newpy38/pytorch1.5/tools/setup_helpers/cmake.py", line 141, in run
    check_call(command, cwd=self.build_dir, env=env)
  File "/usr/lib/python3.8/subprocess.py", line 364, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '6']' returned non-zero exit status 2.
Other system information
DJI Manifold 2
Distributor ID: Ubuntu
Description: Ubuntu 16.04.7 LTS
Release: 16.04
Codename: xenial
8 GB memory
NVIDIA Jetson TX2
ARMv8 Processor rev 3 (v8l) × 4 ARMv8 Processor rev 0 (v8l) × 2
NVIDIA Tegra X2 (nvgpu)/integrated
64-bit
deviceQuery output
/usr/local/cuda/samples/1_Utilities/deviceQuery$ ./deviceQuery
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "NVIDIA Tegra X2"
CUDA Driver Version / Runtime Version 9.0 / 9.0
CUDA Capability Major/Minor version number: 6.2
Total amount of global memory: 7839 MBytes (8219348992 bytes)
( 2) Multiprocessors, (128) CUDA Cores/MP: 256 CUDA Cores
GPU Max Clock rate: 1301 MHz (1.30 GHz)
Memory Clock rate: 1600 Mhz
Memory Bus Width: 128-bit
L2 Cache Size: 524288 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 32768
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 1 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: Yes
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Supports Cooperative Kernel Launch: Yes
Supports MultiDevice Co-op Kernel Launch: Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 0 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.0, CUDA Runtime Version = 9.0, NumDevs = 1
Result = PASS
Output from nvcc --version
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Sun_Nov_19_03:16:56_CST_2017
Cuda compilation tools, release 9.0, V9.0.252
CUDA version
cat /usr/local/cuda/version.txt
CUDA Version 9.0.252
Output from uname -a
uname -a
Linux manifold2 4.4.38+ #2 SMP PREEMPT Mon Jun 3 20:19:02 CST 2019 aarch64 aarch64 aarch64 GNU/Linux
Output from head -n 1 /etc/nv_tegra_release
head -n 1 /etc/nv_tegra_release
# R28 (release), REVISION: 2.1, GCID: 11272647, BOARD: t186ref, EABI: aarch64, DATE: Thu May 17 07:29:06 UTC 2018
Please understand that I am using a rather obscure embedded computer. I have posted on the NVIDIA forums, and after a series of exchanges I was advised to simply install from source.
However, when trying to install from source, I ran into the issues above as well.
Versions
PyTorch version: 1.10.1
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 16.04.7 LTS (aarch64)
GCC version: (Ubuntu/Linaro 5.5.0-12ubuntu1~16.04) 5.5.0 20171010
Clang version: Could not collect
CMake version: version 3.22.0
Libc version: glibc-2.23
Python version: 3.8.9 (default, Apr 3 2021, 01:02:10) [GCC 5.4.0 20160609] (64-bit runtime)
Python platform: Linux-4.4.38+-aarch64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: 9.0.252
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/aarch64-linux-gnu/libcudnn.so.7.1.5
/usr/local/cuda-9.0/targets/aarch64-linux/lib/libcudnn.so.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.22.1
[pip3] torch==1.10.1
[pip3] torchaudio==0.10.1
[pip3] torchvision==0.11.2
[conda] Could not collect
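Note that collect_env lists two copies of libcudnn, one under /usr/local/cuda and one under /usr/lib/aarch64-linux-gnu. One guess on my part (unverified) is that the build is resolving the wrong copy. If so, pytorch's setup.py documents environment variables for pinning the cuDNN paths explicitly; a sketch, where the include path is my assumption for a JetPack install:

```shell
# Unverified sketch: pin the build to the aarch64 copy of cuDNN.
# CUDNN_LIB_DIR / CUDNN_INCLUDE_DIR / CUDNN_LIBRARY are the variables
# documented in pytorch's setup.py; /usr/include for cudnn.h is my guess.
export CUDNN_LIB_DIR=/usr/lib/aarch64-linux-gnu
export CUDNN_LIBRARY=/usr/lib/aarch64-linux-gnu/libcudnn.so.7.1.5
export CUDNN_INCLUDE_DIR=/usr/include

USE_MKLDNN=0 USE_QNNPACK=0 USE_NNPACK=0 USE_DISTRIBUTED=0 BUILD_TEST=0 \
python setup.py bdist_wheel
```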
Sorry for the long logs.