I am having some trouble getting torch to work with CUDA support on my Jetson Orin Nano. I am using JetPack 6.2 (Ubuntu 22.04 and CUDA 12.6) with Python 3.10.12, and the NVDLA libraries appear to be present:
kevin@kevin-desktop:~/Downloads$ ls -ll /usr/lib/aarch64-linux-gnu/tegra/libnvdla*
-rw-r--r-- 1 root root 8159168 Jan 7 20:41 /usr/lib/aarch64-linux-gnu/tegra/libnvdla_compiler.so
-rw-r--r-- 1 root root 6499168 Jan 7 20:41 /usr/lib/aarch64-linux-gnu/tegra/libnvdla_runtime.so
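For reference, this is the quick sanity check I've been running to tell whether a given torch build actually has CUDA support (as I understand it, torch.version.cuda comes back None on a CPU-only wheel):

import torch

print(torch.__version__)          # which torch build is actually installed
print(torch.version.cuda)         # CUDA version torch was compiled against; None on a CPU-only build
print(torch.cuda.is_available())  # what I ultimately need to be True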
But nonetheless I can't get torch and CUDA to hold hands and sing Kumbaya. I've double-checked the compatibility matrix on the PyTorch "Start Locally" page, tried building from source, triple-checked the Python and CUDA versions, and set every flag I could find, but no luck. The error I got when building from source was:
ModuleNotFoundError: No module named 'typing_extensions'
CMake Error at cmake/Codegen.cmake:225 (message):
Failed to get generated_headers list
Call Stack (most recent call first):
caffe2/CMakeLists.txt:2 (include)
but of course typing_extensions was already installed. No idea where to go from here.
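In case it matters, this is how I confirmed the module really is importable, and which interpreter it lives in. My guess (just speculation on my part) is that the CMake codegen step might be invoking a different Python than the one I installed typing_extensions into:

import sys
import importlib.metadata
import typing_extensions  # imports cleanly from my shell

# Installed version, and the interpreter I installed it into.
# If the build uses a different python3 than this path, that could
# explain the ModuleNotFoundError during codegen.
print(importlib.metadata.version("typing_extensions"))
print(sys.executable)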