Jetpack version: 4.6.1
I was able to install PyTorch 1.11 but get this error:
python3 -c "import torch"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/tahir/.local/lib/python3.6/site-packages/torch/__init__.py", line 198, in <module>
_load_global_deps()
File "/home/tahir/.local/lib/python3.6/site-packages/torch/__init__.py", line 151, in _load_global_deps
ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
File "/usr/lib/python3.6/ctypes/__init__.py", line 348, in __init__
self._handle = _dlopen(self._name, mode)
OSError: libomp.so: cannot open shared object file: No such file or directory
I set the environment variable with export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64 as shown here, but I was unable to find a libomp.so file in /usr/local/cuda-10.2/lib64/.
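As a quick sanity check (my own diagnostic, not from any official guide), you can ask the dynamic linker whether it can resolve the OpenMP runtime at all; setting LD_LIBRARY_PATH only helps if the file actually exists somewhere on the search path:

```python
import ctypes.util

# Ask the loader where (if anywhere) it can resolve libomp.
# None means no libomp.so is visible on the library search path,
# so LD_LIBRARY_PATH alone cannot fix the import error.
path = ctypes.util.find_library("omp")
print("libomp found at:", path)
```

If this prints None, the library is genuinely missing from the system rather than just misplaced on the path.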
I found a post with the same error but without the solution.
Running sudo apt-get install libomp-dev seems to have fixed the file-not-found issue (reference).
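To verify the fix without importing torch, one can repeat the same dlopen that torch's _load_global_deps() performs (a minimal sketch of what the traceback shows failing):

```python
import ctypes

# Try the same RTLD_GLOBAL dlopen of libomp.so that PyTorch's
# _load_global_deps() attempts at import time.
try:
    ctypes.CDLL("libomp.so", mode=ctypes.RTLD_GLOBAL)
    status = "libomp.so loaded"
except OSError as exc:
    status = f"still missing: {exc}"
print(status)
```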
However, PyTorch now warns that it does not support the GPU on my board:
>>> import torch
>>> torch.__version__
'1.11.0a0+17540c5'
>>> torch.cuda.get_device_name()
/home/tahir/.local/lib/python3.6/site-packages/torch/cuda/__init__.py:121: UserWarning:
Found GPU0 NVIDIA Tegra X1 which is of cuda capability 5.3.
PyTorch no longer supports this GPU because it is too old.
The minimum cuda capability supported by this library is 6.2.
warnings.warn(old_gpu_warn % (d, name, major, minor, min_arch // 10, min_arch % 10))
/home/tahir/.local/lib/python3.6/site-packages/torch/cuda/__init__.py:144: UserWarning:
NVIDIA Tegra X1 with CUDA capability sm_53 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_62 sm_72.
If you want to use the NVIDIA Tegra X1 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
'NVIDIA Tegra X1'
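The mismatch in the warning can be cross-checked programmatically (a hedged sketch using public torch.cuda APIs; on this board it should show that sm_53 is absent from the build's arch list):

```python
# Compare the device's compute capability with the architectures
# this PyTorch build was compiled for. A device is usable only if
# its sm_XY appears in (or is covered by) the build's arch list.
try:
    import torch
    major, minor = torch.cuda.get_device_capability(0)
    arch_list = torch.cuda.get_arch_list()  # e.g. ['sm_62', 'sm_72']
    result = f"sm_{major}{minor} in build: {f'sm_{major}{minor}' in arch_list}"
except Exception as exc:  # torch missing or CUDA unavailable
    result = f"could not query: {exc}"
print(result)
```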
Does this mean the GPU won't work with this version of PyTorch?